Let’s be clear: we are far from being experts on the subject, and this article does not claim to provide turnkey answers. It does, however, have the merit of raising a few important questions facing today’s SME directors and HR managers, and of providing some food for thought.
From an HR point of view, the question of whether to integrate AI into one’s professional environment is already obsolete. The question today is how, and under what conditions, whether the aim is performance or recruitment attractiveness.
When technology outpaces human adaptation
Like every major technological advance, AI has aroused as much fear as it has immoderate enthusiasm and sometimes disproportionate expectations. After the initial phase of wonder, and a few outcries later, debates are refocusing on more pragmatic aspects. In the meantime, AI has made its way into our lives and continues to evolve rapidly, expanding its functionalities and the solutions it offers.
It has already found its way into many business processes, with sometimes disappointing results and its share of slip-ups and unfortunate consequences. So how do you avoid the pitfalls and get the results you want?
Opportunities and risks
Are we delegating the very essence of our humanity – doubt, choice, error – to algorithms capable of predicting our desires? And what if, in this quest for efficiency, we are losing what makes us profoundly alive? … Worse still, if even our mistakes are calculated in advance, where is the pleasure in being humanly imperfect?
From a personal point of view, it’s up to each individual to draw on their own capacity for reflection and critical thinking – in short, their own free will – to find the ideal balance… In the professional context, on the other hand, HR managers and executives must anticipate the opportunities and risks linked to AI. It is their responsibility to implement a secure integration policy tailored to the company’s real needs, avoiding grey areas as far as possible.
Used properly, AI represents a real lever for improvement and growth, as well as an asset for recruiting young talent. A number of employees are already using generative AI at work, sometimes unbeknownst to company management, with all the risks this entails. According to a survey quoted in PME Mag (references and link at the end of the article), 34% of Swiss respondents admit to having already used a generative AI tool not authorized by their employer, and in some cases to passing off its output as their own.
Technology has expanded the field of possibilities so quickly that companies haven’t had time to adapt, and clear, well-defined guidelines are often lacking. The result is a muddle where AI rules, and where we’re not always sure who is controlling whom. The problem is serious, and so are its potential consequences!
So where to start?
… with the right questions, ideally in the right order:
- Opportunities > What can AI bring to my company? (quantitative and qualitative objectives) / At what level? (sector, department, function) / To what extent? (dosage, digital–human balance)
- Risks > Avoid grey areas and define risks (legal, qualitative, human, organizational).
- Valuation and means > Define measures and budget (training, explanations, rules, restrictions, moderation, protection filters…)
Don’t hesitate to call on the help of a corporate AI expert!
It can’t be said often enough: successful AI integration requires a clear strategy, one that aligns tools with real business needs, invests in ongoing employee training, and ensures ethical transparency in data use. By combining vigilance with long-term vision, companies can take advantage of AI while minimizing its negative impacts.
Practical aspects
The general idea is to entrust AI with low value-added tasks, so that we can concentrate on what will really make the difference.
Yes, but… the challenge lies in defining precisely and clearly what counts as a low value-added task! It’s also crucial at this stage to identify the essential tasks that warrant greater human input, so that the necessary reorganizations and training can be anticipated.
Evaluate “low value-added” tasks more effectively
This requires a structured and pragmatic approach:
1. Process mapping
- List all the tasks performed in a function or department.
- Identify the key stages and the people involved.
2. Analyze time and resources
- Evaluate the time spent on each task.
- Measure the effort in terms of resources (human, technological, financial).
3. Assess strategic impact
- Classify tasks according to their contribution to value creation: alignment with corporate objectives, impact on customer satisfaction or financial results.
4. Identify repetitive or routine tasks
- Look for tasks that are repeated frequently and require few specific skills. These are often tasks that can be automated or easily delegated.
5. Gather feedback from employees
- Involve employees to understand frustrations, feelings of usefulness and perceived value of tasks performed.
6. Prioritize actions into three groups
- Automate > repetitive, rule-based tasks
- Simplify or outsource > complex but not very strategic tasks
- Keep > high value-added tasks.
This assessment enables us to focus our efforts on strategic activities, while freeing up time for innovation and creativity.
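The six steps above can be sketched as a simple scoring exercise. The following Python snippet is only an illustration: the two ratings (repetitiveness and strategic value) and the thresholds that map them to the three action groups are hypothetical, and any real assessment would be calibrated with the employee feedback and process mapping described above.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    repetitiveness: int   # 1 = one-off, 5 = highly repetitive (step 4)
    strategic_value: int  # 1 = low impact, 5 = core to value creation (step 3)

def prioritize(task: Task) -> str:
    """Sort a task into one of the three action groups (step 6).
    Thresholds are illustrative, not prescriptive."""
    if task.repetitiveness >= 4 and task.strategic_value <= 2:
        return "Automate"
    if task.strategic_value >= 4:
        return "Keep"
    return "Simplify or outsource"

# Hypothetical examples of mapped tasks (steps 1 and 2)
tasks = [
    Task("Weekly timesheet consolidation", repetitiveness=5, strategic_value=1),
    Task("Client strategy workshops", repetitiveness=2, strategic_value=5),
    Task("Internal event logistics", repetitiveness=3, strategic_value=2),
]

for t in tasks:
    print(f"{t.name}: {prioritize(t)}")
```

Even a rough scoring like this makes the trade-offs explicit and gives managers a shared vocabulary for deciding what to automate first.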
AI and the law
Legal limits on the use of AI include compliance with data protection regulations, such as the GDPR in Europe, which requires transparency, consent and protection of personal data. Algorithmic biases must be avoided to ensure fairness, and AI must not infringe fundamental rights, such as non-discrimination. In addition, emerging frameworks, such as the European Union’s AI Act, aim to regulate high-risk applications, notably in healthcare, justice and recruitment, to protect citizens and limit abuse.
In Europe: under the European Union’s AI Act, AI-generated content such as deepfakes or other synthetic media must be explicitly flagged to prevent manipulation and ensure transparency.
Protection tools and moderators
While AI can be used to significantly strengthen existing cybersecurity systems, its use at a “novice” level represents a real risk of sensitive-data leakage. However, a number of cyber-protection and regulation tools already on the market focus on preventing potential data leaks, protecting corporate, customer and employee data, and regulating the use of certain AI tools by reducing blind spots.
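To make the idea of leak prevention concrete, here is a minimal Python sketch of a pre-submission filter that redacts obviously sensitive strings before a prompt leaves the company. The patterns (email, IBAN, phone) and placeholder labels are assumptions for illustration only; a real deployment would rely on a dedicated DLP (data loss prevention) product with far broader coverage.

```python
import re

# Illustrative patterns only; real DLP tools cover many more data types.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "PHONE": re.compile(r"\+\d{2}[\s\d]{8,14}\d"),
}

def redact(prompt: str) -> str:
    """Replace sensitive matches with placeholder tags before the text
    is sent to an external generative AI tool."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jean.dupont@example.ch, IBAN CH9300762011623852957."))
```

A filter like this sits between employees and the AI tool, so that convenience for users does not come at the cost of exposing customer or employee data.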
The HR role in all this
The role of HR as mediator in the integration of AI in companies is crucial: it’s a matter of providing support, reassurance and training, and of setting out a clear framework. Whether on the candidate or employer side, AI training is an asset when hiring, and a valuable aid in personalized ongoing training and in-house talent development. However, it also raises ethical questions: how can we guarantee fairness in algorithmic processes, or keep the human at the center of strategic decisions? The HR manager must take care to make employees aware of the benefits and limits of AI, while promoting a culture of adaptability. By investing in digital skills training and valuing emotional intelligence, he or she can help ensure a smooth, inclusive transition to this new technological era.
Argos Group, Mastery. Clarity. Commitment.