Celebrating Data Privacy Day: Ensuring ethical agentic AI in our daily interactions
Both AI agents and agentic AI are becoming increasingly powerful and prevalent. With AI agents, we can automate simple tasks and save time in our everyday lives. With agentic AI, businesses can automate complex enterprise processes. Widespread AI use is an inevitability, and the question going forward is not if we’ll use the technology but how well.
In a world where AI takes on more responsibility, we need to know how to measure its effectiveness. Metrics like the number of human hours saved or the costs reduced are, of course, important. But we also need to consider how ethically and securely our AI solutions operate. This is true both when adopting third-party solutions and when training AI in-house.
Enterprise leaders today should approach AI agents and agentic AI with cautious optimism. The technology has the potential to elevate user experiences and businesses dramatically. It can also erode consumer and employee trust and cause major operational disruption.
In 2025, it’s crucial to view data privacy in AI projects as a central concern rather than an afterthought. In fact, how you handle data and apply it in AI applications can be a positive differentiator. Consumers are more aware than ever of how their data is used. We must be good stewards of the information we’re given -- and prove it to stakeholders every day.
Why Data Privacy Is Hard with AI Agents and Agentic AI
A few factors make ethical, secure, and transparent AI difficult to achieve. First, AI agents and agentic AI require a tremendous amount of data. Fine-tuning AI models to respond appropriately in real-world scenarios requires large volumes of high-quality data. Low-quality or limited data means our AI solutions have less context to draw on when responding to user inputs or acting toward a goal. Organizations that lack sophisticated data management capabilities struggle to maintain control over their AI solutions or increase the ROI they deliver.
AI is also imperfect. Models can hallucinate or produce biased results. AI agents and agentic AI are designed to make probabilistic decisions based on training data -- if the training data is off base, outputs will be as well. The problem is that it's hard for many businesses to know the quality of the underlying data or the training methods used. And standing up new large language models (LLMs) or retrieval-augmented generation (RAG) systems is out of reach for most organizations, which is why out-of-the-box solutions and foundation models are so popular today.
Fortunately, leaders can implement systems and practices to help mitigate these issues. As AI technology advances, taking proactive steps to address potential risks, protect user data, and gain public trust is paramount.
Key Data Privacy Strategies for AI Agents and Agentic AI
One of the best ways to promote ethical AI use is to plan with data privacy in mind. Rather than layer on privacy and security after the fact, leaders should think first about all aspects related to data management:
- What data do we need?
- How will we store it?
- How will we secure it?
- Who will have access?
- How will it be used by AI?
- How long will we keep it?
Answering these types of questions at the outset of a new AI project helps keep data privacy and ethics top of mind for everyone involved.
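To make this concrete, here is one minimal sketch, in Python, of how a team might turn those questions into a reviewable data inventory record. The class, field names, and gap checks are hypothetical illustrations, not a prescribed standard -- the point is simply that each question gets an explicit, auditable answer before the project starts.

```python
from dataclasses import dataclass


@dataclass
class DataInventoryRecord:
    """One record per dataset an AI project touches. Field names are illustrative."""
    dataset: str
    purpose: str              # What data do we need, and why?
    storage_location: str     # How will we store it?
    encryption_at_rest: bool  # How will we secure it?
    access_roles: list        # Who will have access?
    ai_usage: str             # How will it be used by AI?
    retention_days: int       # How long will we keep it?


def privacy_gaps(record: DataInventoryRecord) -> list:
    """Return a list of unanswered privacy questions for human review."""
    gaps = []
    if not record.purpose:
        gaps.append("purpose undefined")
    if not record.encryption_at_rest:
        gaps.append("no encryption at rest")
    if not record.access_roles:
        gaps.append("access roles not assigned")
    if record.retention_days <= 0:
        gaps.append("retention period not set")
    return gaps
```

A record that comes back with an empty gap list has, at minimum, answered the checklist; anything else goes back to the project team before data is collected.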
Leaders should also provide ample training for technical and non-technical staff. Anyone who works with AI in some capacity within the organization should know how to use the technology properly. This includes covering topics like what is appropriate to include in prompts to publicly available LLMs and how to store content generated by AI.
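One practical habit that training can instill is redacting sensitive details before a prompt ever leaves the organization. The sketch below shows the idea with a few simplistic regular expressions; the patterns are illustrative only, and real redaction needs far broader coverage than this.

```python
import re

# Simplistic patterns for common PII; production redaction needs far more coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}


def redact(prompt: str) -> str:
    """Replace likely PII with placeholder tokens before sending a prompt to a public LLM."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

Even a rough filter like this reinforces the training message: what goes into a public model should be treated as if it could come back out.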
Technology leaders should also consider which AI adoption approach is right for them. Building from scratch, building on foundation models, and adopting out-of-the-box AI products are all viable options. The right path depends on the organization's AI capabilities, as well as its understanding of how to ensure data privacy and security within the guardrails of whichever solution it adopts.
Adding human checks and balances into the mix is another way to keep AI honest. Teams should test AI agent and agentic AI performance regularly. Critical decisions made by AI can also be routed to a human decision maker for approval before they take effect.
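A simple approval gate illustrates the routing idea. In this hypothetical sketch, the risk threshold, the shape of the decision record, and the `approve` callback are all assumptions -- in practice the gate would plug into whatever review workflow the organization already runs.

```python
def execute_with_oversight(decision: dict, approve) -> str:
    """Route high-impact AI decisions to a human before they take effect.

    `decision` is an illustrative dict with a "risk_score" between 0 and 1;
    `approve` is any callable that asks a human reviewer and returns True/False.
    """
    HIGH_RISK = 0.7  # assumed threshold; tune per use case
    if decision.get("risk_score", 1.0) >= HIGH_RISK:
        if not approve(decision):
            return "rejected by human reviewer"
        return "executed after human approval"
    return "executed automatically"
```

Low-risk actions flow through automatically, so the human reviewers spend their time only where the stakes justify it.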
Finally, real-time monitoring and compliance are important. The faster teams can identify the opportunities, risks, and problems associated with AI agents or agentic AI, the better. This level of transparency is especially important for organizations that operate across private and public clouds, where data moves between the two.
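At its core, that kind of monitoring means emitting a structured audit record for every agent action and flagging the movements that matter. The sketch below assumes hypothetical field names and a simple "restricted data leaving the private cloud" rule, standing in for whatever policies a compliance team actually enforces.

```python
import time


def audit_event(agent: str, action: str, classification: str, destination: str) -> dict:
    """One structured audit record per agent action. Field names are illustrative."""
    return {
        "ts": time.time(),
        "agent": agent,
        "action": action,
        "classification": classification,  # e.g. "public", "restricted"
        "destination": destination,        # e.g. "private-cloud", "public-cloud"
    }


def flag_violations(events: list) -> list:
    """Flag restricted data heading to the public cloud -- the cross-cloud
    movement that monitoring needs to surface quickly."""
    return [
        e for e in events
        if e["classification"] == "restricted" and e["destination"] == "public-cloud"
    ]
```

Feeding these records into a live dashboard or alerting pipeline is what turns after-the-fact audits into the real-time transparency described above.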
It’s common for executives today to view ethical AI and data privacy as someone else’s problem or as an obstacle to growth. The opposite is true. Any organization that uses AI agents or agentic AI is responsible for ensuring proper use and taking care of stakeholders who are affected by AI decisions. Striking the right balance between innovation and ethical AI operation will be a key differentiator going forward and something that will impact our lives positively for years to come.
Carolyn Duby is Field CTO, Cloudera.
Image Credit: Cttpnetwork / Dreamstime.com