AI agents can make decisions autonomously in certain circumstances. As this sounds potentially dangerous, what risks should you pay particular attention to when using AI agents?
Cedric Klinkert: One known risk is that of hallucinations, where the AI invents plausible-sounding but incorrect answers. This often happens when an AI agent doesn’t know what to do but has been optimised to give a convincing answer. It therefore helps to explicitly enable the agent to signal that it cannot solve a task. It is also very important that an AI agent can only work with data it is authorised to use; its permissions should never exceed those of its user. At bbv’s AI hub, for instance, we ensure that only HR employees have access to employee data. AI agents are still a novel technology that can be used in many ways, so a systematic risk analysis should be carried out for any automated process.
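The two safeguards mentioned here can be pictured in a minimal sketch. All names (`User`, `fetch_employee_data`, the `"hr"` role) are hypothetical illustrations, not bbv’s actual implementation: the agent inherits the user’s permissions rather than its own, and signals failure explicitly instead of inventing a convincing answer.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    roles: set  # e.g. {"hr"} for HR staff

def fetch_employee_data(user: User) -> str:
    # The agent works with the user's permissions, never more:
    # only HR staff may read employee records.
    if "hr" not in user.roles:
        raise PermissionError("not authorised to read employee data")
    return "employee records"

def answer(question: str, user: User) -> str:
    try:
        data = fetch_employee_data(user)
    except PermissionError:
        # Instead of fabricating a convincing answer, the agent
        # explicitly signals that it cannot solve the task.
        return "I cannot answer this: missing authorisation."
    return f"Answering {question!r} using {data}."
```

The key design choice is that the permission check lives in the data-access layer, so the agent cannot bypass it regardless of how it is prompted.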
How can it be ensured that decisions made by AI agents are consistent with laws, ethical guidelines and societal norms?
The character of an AI agent, and therefore its statements, can be influenced by means of the system prompt. Fine-tuning is likewise possible, or control mechanisms can be integrated. However, absolute security cannot be guaranteed. It is therefore essential that an AI agent undergoes a probationary period during which its work is reviewed and approved by subject-matter experts. Once we are confident in its work, such a review can be handed over to other, independent AI programs.
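Steering an agent’s character via the system prompt typically looks like the following sketch, using an OpenAI-style chat message list. The policy wording is an invented illustration, not bbv’s actual prompt:

```python
# Sketch of a system prompt that constrains an agent's character.
# The policy text is illustrative, not an actual production prompt.
messages = [
    {
        "role": "system",
        "content": (
            "You are a careful assistant. Follow company policy, "
            "cite sources for every claim, and answer 'I don't know' "
            "instead of guessing when you are unsure."
        ),
    },
    {"role": "user", "content": "Summarise the quarterly report."},
]
```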
How can AI agents be prevented from making discriminatory decisions based on biased datasets?
The underlying dataset of a pre-trained model cannot be altered. However, we can check both the text input and the output of an agent for discriminatory statements and neutralise them.
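A deliberately simple sketch of such an input/output check follows. A production system would use a trained classifier or a moderation API; the word list here is a placeholder assumption to show the shape of the filter:

```python
# Hypothetical, deliberately simple guardrail. Real systems would use
# a trained classifier or moderation API instead of a word list.
BLOCKED_TERMS = {"slur_a", "slur_b"}  # placeholder terms

def neutralise(text: str) -> str:
    """Check agent input or output and mask any flagged terms."""
    cleaned = [
        "[removed]" if word.lower().strip(".,!?") in BLOCKED_TERMS else word
        for word in text.split()
    ]
    return " ".join(cleaned)
```

The same function can be applied symmetrically, once to the user’s input before it reaches the model and once to the model’s output before it reaches the user.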
How can the decision-making processes of AI agents be reproduced transparently, especially in safety-critical industries such as medicine?
Transparency is the top priority in industries such as medicine. bbv’s AI hub uses an approach based on an inner monologue for this purpose: the agent talks to itself, documenting why it uses certain tools, recording the results and sources of its actions, and noting the next step it is planning. This gives us insight into the AI agent’s thought processes and allows the user to understand and verify its decisions. I generally recommend first gaining experience with low-risk applications before tackling safety-critical ones.
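Such an inner-monologue log can be pictured as a structured trace. The field names and the lab-database example below are invented for illustration, not bbv’s actual format, but they capture the four elements mentioned: the reason a tool is used, its result, the source, and the planned next step.

```python
import json
from datetime import datetime, timezone

def log_step(trace: list, thought: str, tool: str,
             result: str, source: str, next_step: str) -> None:
    """Append one inner-monologue entry: why a tool was used, what it
    returned, where the result came from, and what is planned next."""
    trace.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "thought": thought,    # why this tool is being used
        "tool": tool,
        "result": result,
        "source": source,      # provenance, so experts can verify
        "next_step": next_step,
    })

trace = []
log_step(trace,
         thought="Need the patient's lab values to answer.",
         tool="lab_db.query",
         result="HbA1c = 6.1%",
         source="lab_db, record 4711",
         next_step="Compare value against the reference range.")
print(json.dumps(trace, indent=2))
```

Because every entry carries a source, a subject-matter expert reviewing the trace can verify each step independently of the agent’s final answer.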
What impact will AI agents have on the job market?
The answer to this question depends on the complexity of the tasks that AI agents will be able to handle in the future. One indicator of this is the maximum text length an AI can process, its context window, which acts as the agent’s short-term memory for completing a task. At present I see the greatest potential in using AI agents to automate many low-risk routine tasks.
*This interview first appeared in Netzwoche Issue 10/2023.
Dr. Cedric Klinkert is a software engineer at bbv, working on embedded applications in the industrial and medical technology fields. Together with Marius Högger, he leads bbv’s data science community, promotes the fundamentals of AI and MLOps within bbv and works on applications that use LLMs to develop innovative solutions.