Ensuring transparency when deploying AI [Q&A]

There are many factors to consider when deploying AI into an organization, not least of which is maintaining transparency and trust in the process.

We spoke to Iccha Sethi, VP of engineering at Vanta, to learn more about why transparency is so important and how governments and enterprises are responding to this challenge.

BN: Why are trust and transparency in AI so important for businesses today?

IS: Trust and transparency in AI aren't optional anymore. They're essential to long-term business success, especially as AI-driven threats continue to rise. And they are indeed rising.

According to Vanta's State of Trust 2024 report, AI-based cyber attacks emerged as the top threat to businesses last year, with phishing attacks at 33 percent and AI-driven malware at 32 percent.

Despite this fact, only 40 percent of organizations conduct regular AI risk assessments, and only 36 percent have an established AI policy.

This gap between awareness and action leaves companies exposed. When businesses prioritize transparency and accountability in AI, they're not just boosting security -- they're building trust with customers and partners, laying a foundation for lasting success.

BN: How are governments responding to the challenges of AI, and what does this mean for companies?

IS: The rapid advancements in AI have prompted governments and regulatory bodies to step in. For example, in the United States, the Biden Administration issued Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence in 2023, mandating thorough risk assessments and setting standards for responsible AI deployment across federal agencies.

Federal agencies like the National Institute of Standards and Technology (NIST) have introduced the AI Risk Management Framework to help organizations navigate AI-related threats. Similarly, the Open Worldwide Application Security Project (OWASP) has created frameworks to educate the industry on the security risks tied to deploying and managing large language models (LLMs).

Meanwhile, the European Union is leading the charge with the AI Act, which took effect last year. This legislation applies to all 27 member states and introduces a groundbreaking approach by classifying AI systems based on their risk levels.

These initiatives are a clear signal: businesses need to adopt responsible AI practices to comply with regulations, prioritize safety, and build trustworthy AI systems.

BN: With these pressures, what steps can organizations take to responsibly develop and deploy AI?

IS: Companies can start by implementing a clear framework of best practices:

  • Conduct assessments to identify potential negative impacts early, especially in high-stakes areas like financial services, where AI could introduce biases if not properly managed.
  • Integrate security and privacy from the outset, using techniques like federated learning or differential privacy to protect sensitive information (a brief illustration follows this list).
  
  • Control data access by implementing secure integrations rather than training directly on customer data.
  • Establish strong Data Processing Agreements (DPAs) with third-party providers to ensure data use is properly managed.
  • Prioritize transparency by using interpretable AI models or explainable AI tools, making complex AI decisions more understandable for customers and partners.
  • Give customers control through an informed consent model, allowing them to opt in or out of AI features according to their preferences.
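
To make the differential privacy point above concrete, here is a minimal sketch, not a Vanta recommendation: the record structure, function name, and epsilon value are illustrative assumptions. The idea is to report a noisy aggregate over customer data rather than the exact figure, so no individual record can be inferred from the output.

    # Minimal sketch of epsilon-differential privacy via the Laplace mechanism.
    # All names and values here are illustrative assumptions.
    import numpy as np

    def private_count(records, predicate, epsilon=1.0):
        """Differentially private count of records matching `predicate`.

        A counting query changes by at most 1 when any single record is added
        or removed (sensitivity = 1), so Laplace noise with scale 1/epsilon
        satisfies epsilon-differential privacy for this query.
        """
        true_count = sum(1 for r in records if predicate(r))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Example: report how many (hypothetical) customers enabled an AI feature
    # without exposing any single customer's choice.
    customers = [{"id": i, "ai_feature_enabled": i % 3 == 0} for i in range(1000)]
    print(private_count(customers, lambda c: c["ai_feature_enabled"], epsilon=0.5))

Smaller epsilon values add more noise, trading accuracy for stronger privacy guarantees.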

BN: Vanta's report suggests that companies with high AI transparency scores outperform their peers. Why is transparency so important for business success?

IS: Transparency builds trust. Case in point: our research found that companies with high transparency scores saw a 32 percent improvement in customer satisfaction.

Leading analyst firm Gartner has also predicted that by 2026, organizations that make AI transparency, trust, and security a priority will see a 50 percent boost in AI adoption rates and user acceptance.

The bottom line is customers want to know how their data is used, and transparency in AI processes allows them to see that companies are handling their information responsibly. It’s not just about protecting against threats; it’s about openly communicating practices.

Companies that prioritize transparency show customers and partners they value ethical AI, which strengthens long-term relationships and fosters trust.

BN: As AI becomes more integrated into business operations, how can companies ensure they’re staying compliant and proactive?

IS: Compliance with AI is an ongoing process. Businesses need to conduct regular risk assessments and audits, and stay up to date with standards like NIST's AI Risk Management Framework and certifications like ISO 42001.

Regulations are also evolving quickly, so staying current with frameworks like the EU AI Act is key to reinforcing accountability and reliability in AI systems.

Companies should also test their AI solutions internally before deploying them to customers. This helps identify issues early and demonstrates a commitment to responsible AI use. When companies lead by example, they establish a culture of continuous improvement and transparency in AI practices.

BN: What is your ultimate message to organizations considering AI's role in their future?

IS: Responsible AI practices are essential for building a secure and trustworthy relationship with customers, partners, and regulators. Companies that prioritize transparency, proactive risk management, and customer control over data will be in the best position to succeed in an AI-driven world. AI isn't just about technological advancement -- it's about fostering a sustainable, ethical approach that aligns with customer expectations. As organizations embrace these principles, they not only protect their operations but also lay a foundation for long-term growth and trust.

Image credit: denismagilov/depositphotos.com
