28 September 2023
14 min read

Social skills in the age of AI

The use of AI and data intelligence opens up numerous opportunities for companies to maintain and further develop their position in the market. Not least, it can help reduce the workload on employees. HR developer Dr. Astrid Selke, AI expert Dr. Gerhard Heinzerling, and ChatGPT shed light on the challenges that need to be overcome.
Dr. Astrid Selke and Dr. Gerhard Heinzerling: Social skills in the age of AI.

Gerhard, you are working with us as an AI expert. Hence the question: in which areas are companies already using AI, which companies are particularly advanced, and what developments do you expect?

Gerhard: How wonderful that we are delving into this rather tricky topic. AI has numerous applications, and often we don’t even realize that we are already using it. Let’s take our cell phones as an example. There is already a lot of AI involved there, especially in image processing. AI comes into play when the phone cleans up noisy images, removes elements from pictures, or simply recognizes people’s faces. This, however, is closely linked to data protection. Technically, it isn’t a problem at all, as we can see in countries like China, where there are early attempts to recognize people not only by their faces but by their entire appearance.

Or think of searching the internet. For us, it has become completely routine, but it wouldn’t be possible without AI. To answer your question, though, let’s shift our focus from private cell phone use to companies. AI has, of course, found its way there too. YouTube videos can be automatically summarized, product quality checks are performed by cameras with AI image recognition, and ChatGPT is currently enabling a significant leap in new applications in which texts, and soon images and videos, are analyzed automatically.

What can we expect from AI? I believe it will find its way into numerous areas. We’ll see its influence on music, 3D printing, weather forecasts, medical prognoses and recommendations, robot control, and much more. The technology is here and evolving rapidly, driven by major players such as Microsoft, Google, and Nvidia. At the same time, a new generation is emerging, studying data science and entering the market with fresh skills. The innovations will be diverse and exciting.

What’s still not possible is predicting the lottery numbers (grins).

What opportunities are emerging for companies using AI, and what risks do you see?

Gerhard: The new opportunities are enormous. Companies that invest in AI now will reap the rewards in a few years. It starts with small helpers, for example those that support our time bookings or automatically create call logs. Then it moves on to topics such as automatic code generation, which may well replace one or two programmers in a year or two. Finally, it extends to major issues such as automated driving and a traffic system controlled by AI.

Automated driving is also often used as an example of the dangers or risks of AI (see article in The Guardian). Of course, you have to be vigilant and alert. However, the good thing about such technical problems is that they can be solved. Once and for all. It might cost a little money, but the problems can be clearly identified and resolved. It’s not the same as eighteen-year-old drivers who drive home drunk from the disco. The problem of driving under the influence is not only much bigger but also not as easily resolved.

But let’s take another example. It’s currently being highlighted in the media that future wars will be fought with drones and that these drones contain AI. That’s true and that’s dreadful. The problem becomes particularly acute when drones are expected to make autonomous decisions about whether or not to fire on a target.

But to this I can only say: neither a drone nor the AI in a drone programs itself. It’s us humans who do that. I’ll draw a comparison which, like all comparisons, is of course a bit flawed. AI is like a hammer. You can use it to drive a nail into the wall or to bludgeon your neighbor to death. This raises the question: “What’s the hammer got to do with it? Should we abolish all hammers?” Or we compare AI with a telephone. Over the phone, I can speak lovingly to my wife or yell at and threaten someone else. And again I wonder: “Should we therefore ban all phones?” It’s we humans who decide whether technical risks are reduced or increased.

That is, of course, correct; ultimately, we humans are responsible. Responsibility is also a topic for Astrid, who, as an HR developer at Arineo, always has people in mind.

Astrid, the use of AI often sparks fears among many – do Arineo colleagues have reservations regarding the use of AI or ChatGPT?

Astrid: AI certainly hasn’t yet gained widespread acceptance in society. Media discussions about AI are often tied to potential staff reductions; this has recently been observed in the media industry, for example, where AI takes over the creative process and, with just a few keywords, independently generates images that would otherwise have been created manually by graphic artists with considerable effort. I believe that these developments can indeed provoke fears among the public.

On the other hand, AI also presents many opportunities. We are already actively using AI to reduce our consultants’ repetitive tasks. For example, we have developed an AI to support the approval of time bookings. This takes a lot of pressure off our project managers and gives them more freedom to focus on the people in client projects.

We also use AI to detect data anomalies. This used to be a very labor-intensive, manual job. Today, this is done for us by AI applications, allowing project staff to concentrate more on genuine consultancy with people and their business processes.
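To make this a little more concrete, here is a minimal, purely illustrative sketch of AI-based anomaly detection, in this case applied to hypothetical time bookings. It is not Arineo’s actual system; the feature set and the use of scikit-learn’s IsolationForest are assumptions chosen only to show how such a detector can flag suspicious records for a human reviewer instead of approving everything automatically.

    # Illustrative sketch only: flag unusual time bookings with an anomaly detector.
    # Features and thresholds are hypothetical, not Arineo's actual model.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical booking features: [hours booked, project id, weekday (0 = Monday)]
    bookings = np.array([
        [8.0, 101, 0],
        [7.5, 101, 1],
        [8.0, 102, 2],
        [7.0, 101, 3],
        [23.5, 101, 5],  # implausibly long booking on a Saturday
    ])

    # Train an Isolation Forest and mark outliers (-1) versus normal bookings (1)
    model = IsolationForest(contamination=0.2, random_state=42)
    labels = model.fit_predict(bookings)

    for booking, label in zip(bookings, labels):
        status = "review manually" if label == -1 else "auto-approve"
        print(booking, "->", status)

In practice, the final decision about a flagged record would of course remain with a human, which is exactly the division of labor described here: the AI handles the repetitive screening, the people handle the judgment calls.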

As an IT company, we are naturally very open to new technologies, and we particularly believe in the success and benefits of AI. We therefore haven’t observed significant reservations about these technologies among our colleagues. However, we always keep our focus on people: technologies should support people and provide them with added value, not replace them.

What initiatives are being created at Arineo to establish a work environment where the use of AI is welcomed?

Astrid: We started looking at AI very early on. Three years ago, supported by the BSFZ, we established a research group at Arineo that is actively working on data mining and image recognition. This team is experimenting with exciting concepts to identify where AI approaches provide significant value for people.

Within the company itself, we have created an open platform where employees can find out about developments in the AI field. The platform is moderated by our research team, which showcases the latest insights there. Of course, ChatGPT has not escaped our attention either. We responded swiftly by providing the necessary platforms and company accounts to our employees, enabling the everyday use of ChatGPT. We actively engage with the topic of AI and offer all employees the opportunity to develop and stay informed in these areas.
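As a side note on how ChatGPT typically ends up inside such internal platforms: one common route is the OpenAI API, which lets internal tools send prompts programmatically. The following minimal sketch only illustrates that general pattern; the model name, prompt, and setup are placeholders and not a description of Arineo’s actual environment.

    # Hypothetical sketch of calling ChatGPT from an internal tool via the OpenAI API.
    # Requires the official "openai" package and an API key in OPENAI_API_KEY.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a helpful assistant for our consultants."},
            {"role": "user", "content": "Summarize the key risks of rolling out AI in customer projects."},
        ],
    )

    print(response.choices[0].message.content)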

How do you think the world of work will change in general? Which (soft) skills do you see as particularly important in the future, and how is Arineo meeting these challenges?

Astrid: Interestingly, in July 2023 I saw a post on LinkedIn about a study by Prof. Carsten C. Schermuly. According to this study, the top four competencies for successful work in the new work environment are self-reflection, adaptability, willingness to learn, and personal responsibility. We in HR development also see these skills as crucial, given the changes that new technologies are expected to bring to the work environment.

By eliminating repetitive tasks, our consultants gain time to serve clients. Gradually, they will shift towards understanding their clients’ overall processes and platforms and become specialists for specific customer inquiries, industries, and so on. To facilitate this transition, we are already focusing on developing the necessary skills, for example through workshops such as ‘MapsTell’ or ‘Giving and Receiving Feedback’, which improve self-reflection and create a positive culture around learning from mistakes.

What can schools do? How should curricula be adapted to incorporate the use of new technologies?

Gerhard: One can only encourage all teachers not to ban the new possibilities but rather to use them with their students. I still remember well when attempts were made at our school to ban calculators. That didn’t last long, and I see a similar scenario for ChatGPT, Stable Diffusion, and other AI systems. The students may be a little quicker on the uptake, but the correct use of the new tools still needs to be learned. Teachers are highly challenged here and must put in the effort if they don’t want to be left behind.

Which skills should be trained in schools today to best prepare students for the changed conditions in the world of work?

Gerhard: The skills that I have just mentioned include, of course, the functional use of the tools. However, that’s the easier part as many tools are very easy to use. The much more challenging part is the substantive use of the new possibilities.

Consider the enormous issue of “fake news”. With the new AI tools, almost any teenager can generate images and text as they please; bots can do this too. The hard part now is figuring out whom I can trust or rely on. Which news is accurate? Which images are real? What is fake? Here, we almost need a skeptical attitude, and I mean skepticism in the best sense: not simply doubting everything, but questioning things systematically and critically. But honestly, who really does that? The images we see on social media are viewed for about a second. Who actually takes the time to ask whether something is real or true? An AI can generate and circulate millions of artificial images. This will become important not only for students but for all of us, for our democracy, and for our common understanding of truth. New professions with grand names such as “fact-checker” are now appearing, for example at ARD or ZDF. Which leads to the next question: who guards the guards?

Is a cooperative organization – i.e. a company without managers – better suited for dealing with the use of new technologies than companies with traditional managers?

Astrid: Our organizational model offers more flexibility in meeting new requirements. Since we work in a more project-oriented way, we can quickly assemble new teams, such as our research group or a working group for the development of an AI for time bookings. We have the advantage of building strength-based, interdisciplinary teams, whereas in other companies discussions are still ongoing about departmental power structures and personal goals.

Thank you for your assessment.

Last but not least, we asked ChatGPT directly how he/she/it sees it.

In which areas are companies currently using AI, which companies are at the forefront, and what developments do you expect?

Companies are already using artificial intelligence (AI) in a variety of areas. Here are some of the most prominent application areas:

  • Customer service: Many companies are using AI-driven chatbots and virtual assistants to improve customer service. These AI systems can answer frequently asked questions, handle support inquiries, and help customers solve problems.
  • Personalized advertising and recommendations: AI is used to create personalized advertising and product recommendations. By analyzing customer data and behavior patterns, companies can create tailored offers and strengthen customer loyalty.
  • Healthcare: AI has applications in medical imaging, diagnostic support, and analysis of large medical datasets. Machine learning can detect diseases faster and develop more precise treatments.
  • Logistics and supply chain: AI systems are used to optimize supply chains, improve demand forecasting, and plan routes for the efficient transportation of goods. This helps to reduce costs and improve delivery times.
  • Financial services: AI-based algorithms are used for fraud prevention, risk analysis, credit scoring, and automated trading. They can identify complex data patterns and make informed decisions.

Some companies are already well-advanced in their use of AI. These include tech giants such as Google, Microsoft, Amazon, and IBM, which are integrating AI into their products and services. Companies like Tesla are using AI in self-driving cars, while companies like Netflix and Spotify are using AI to provide personalized content and music recommendations.

In terms of future development, it is expected that the use of AI will continue to increase. Advancements in machine learning, data analytics, and computing power will enable companies to develop even more powerful AI systems. We can expect increased automation in various industries, leading to increased efficiency, new business opportunities, and an improved customer experience. At the same time, the discussion about ethical aspects and the responsible use of AI will continue to gain importance.

What opportunities are emerging for companies using AI, and what risks do you see?

The use of AI opens up a number of opportunities for companies:

  • Efficiency improvement: AI can automate repetitive tasks, leading to increased efficiency and cost savings. Companies can free up resources and focus on more complex and value-adding tasks.
  • Personalization: AI enables personalized customer targeting and tailored offers. Companies can better understand customer needs and offer more customized products or services, potentially leading to higher customer satisfaction and retention.
  • Data analysis: With AI, companies can analyze large amounts of data and gain valuable insights. This enables more informed decisions, more accurate forecasts, and improved business strategy.
  • Innovation: AI opens up new avenues for innovation in various industries. By using AI, companies can develop new products, services, and business models that can lead to competitive advantages.

Despite the opportunities, there are also risks associated with the use of AI:

  • Data privacy and security: The use of AI often requires access to large amounts of data. Companies must ensure that this data is adequately protected to avoid data breaches and security risks.
  • Ethical issues: AI can pose ethical dilemmas, for example concerning privacy, discrimination, job loss, or dependence on technology. Companies must ensure that the use of AI is ethically responsible and aligns with societal values.
  • Trust and acceptance: The use of AI can lead to concerns and mistrust among customers, employees, and the public. Companies need to communicate transparently how AI will be used and how the associated risks will be addressed in order to build trust and foster acceptance.
  • Error-proneness and accountability: AI systems are not flawless and can make mistakes. Companies must have mechanisms in place to detect and correct such errors and take responsibility, especially in safety-critical areas such as autonomous vehicles or medical diagnostics.

It is important for companies to proactively address these risks by implementing clear policies, ethical guidelines, and mechanisms to monitor and control AI use. This can maximize opportunities and minimize potential negative impacts.

The use of AI often raises fears among people – are there any reservations among colleagues at Arineo regarding the use of AI or ChatGPT?

As an AI model, I don’t have direct insight into the internal affairs of companies like Arineo. I can say, however, that the introduction of AI technologies in companies can often raise concerns and reservations.

Regarding the use of AI or ChatGPT, there are different perspectives and possible reservations that employees might have:

  • Job loss: A common concern is that the use of AI could lead to job losses. If certain tasks are automated, employees may fear that their jobs will become redundant. It is crucial to communicate to employees that AI is often used to support and complement human work, creating new opportunities for other tasks and qualifications.
  • Error-proneness: AI systems are not flawless and can make mistakes. This could raise concerns about the accuracy and reliability of AI applications. Employees might doubt whether they can trust decisions made by AI systems. Here, it is important to ensure clear communication about the strengths and limitations of AI and to introduce monitoring and control mechanisms.
  • Data privacy and ethics: The use of AI often requires access to sensitive data. Employees may have concerns regarding data privacy and the ethical use of this data. Implementing transparent policies and measures to safeguard privacy and handle data responsibly is essential.

To mitigate potential reservations and foster acceptance for the use of AI, it’s crucial to engage in open and transparent communication. Training and further education can help employees better understand the benefits and potential of AI. Moreover, it’s important to involve the employees in the implementation process and take their concerns and opinions seriously to find solutions together.