Save $24! Get 'The Code of Honor: Embracing Ethics in Cybersecurity' for FREE


While some professions -- including medicine, law, and engineering -- have wholeheartedly embraced wide-ranging codes of ethics and conduct, the field of cybersecurity continues to lack an overarching ethical standard. This vacuum constitutes a significant threat to the safety of consumers and businesses around the world, slows commerce, and delays innovation.
The Code of Honor: Embracing Ethics in Cybersecurity delivers a first-of-its-kind, comprehensive discussion of the ethical challenges facing contemporary information security workers, managers, and executives.
The unseen ethical considerations in AI practices: A guide for the CEO


Adoption of Artificial Intelligence (AI) among global corporate enterprises is only accelerating, placing CEOs and business leaders at the confluence of innovation and ethics when it comes to implementing AI projects in their businesses.
While technical prowess and business potential are usually the focus of conversations around AI, the ethical considerations are sometimes overlooked, especially those that are not immediately obvious. From a perspective that straddles business leadership and technical acumen, there are five critical yet often missed ethical considerations in AI practices that should be part of your due diligence before starting any AI project.
Software engineers feel unable to speak up about wrongdoing at work


A new report from software auditing company Engprax finds that 53 percent of software engineers have identified suspected wrongdoing at work, but many are reluctant to report it for fear of retaliation from management.
Of those who have spoken up, 75 percent report facing retaliation the last time they reported wrongdoing to their employers.
AI ethics and innovation for product development


AI ethics are a factor in responsible product development, innovation, company growth, and customer satisfaction. However, the review cycles needed to assess ethical standards in an environment of rapid innovation create friction among teams. Companies often err on the side of getting their latest AI product in front of customers to gather early feedback.
But what if that feedback is overwhelmingly positive and users want more, now?
Developing AI models ethically: Ensuring copyright compliance and factual validation


When constructing large language models (LLMs), developers require immense amounts of training data, often measured in hundreds of terabytes or even petabytes. The challenge lies in obtaining this data without violating copyright laws, relying on inaccurate information, or inviting potential lawsuits.
Some AI developers have been discovered collecting pirated ebooks, proprietary code, or personal data from online sources without consent. This stems from a competitive push to develop the largest possible models, which increases the likelihood of using copyrighted training data, causes environmental damage, and produces inaccurate results. A more effective approach would be to develop small language models (SLMs) with a horizontal knowledge base, using ethically sourced training data and fine-tuning to address specific business challenges.
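To make that alternative concrete, here is a minimal, hypothetical sketch of fine-tuning a small causal language model on text a business owns or has licensed, using the Hugging Face transformers and datasets libraries. The base model name and dataset path are placeholders, not recommendations, and a production pipeline would add provenance tracking, evaluation, and factual-validation steps.

```python
# Minimal sketch: fine-tune a small language model on ethically sourced,
# business-owned text. BASE_MODEL and DATA_FILES below are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "your-org/small-base-model"           # hypothetical small (<1B parameter) model
DATA_FILES = {"train": "licensed_corpus/*.jsonl"}  # data you own or have licensed

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token      # many causal LMs ship without a pad token

# Each JSONL record is assumed to have a "text" field containing consented content.
dataset = load_dataset("json", data_files=DATA_FILES)["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="slm-finetuned",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("slm-finetuned")
```

Because the training corpus is limited to material the organization controls, the copyright and consent questions raised above become auditable rather than open-ended.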