Non-human users pose a serious identity management challenge


Identity and access management (IAM) programs put a lot of emphasis on users, for the obvious reason that compromising user identities is the prime attack vector for ransomware and other threat actors. But while strengthening access and permission controls for employees, third parties and other stakeholders, too many organizations overlook a vast trove of network identities that can be equally vulnerable to attack -- the non-human identities (NHIs) that are proliferating in the cloud.
The number of non-human identities, ranging from API keys and cloud services to DevOps tools and software bots, has exploded in recent years. By some estimates they now outnumber human users on the network by 45 to 1. The tools and services behind non-human identities can significantly increase efficiency and productivity, but they also greatly expand an organization’s attack surface if they’re not properly protected. And too many of them aren’t.
7 best infrastructure monitoring tools for your business


Cloud-native application infrastructure serves as a powerful equalizer, enabling idea-driven teams to access the resources they need to bring their visions to life, no matter their location. However, this convenience introduces new layers of complexity. Each component of the multifaceted structures that underpin modern applications requires careful observation, orchestration, and optimization.
The right infrastructure monitoring tools can play a crucial role in helping your organization achieve higher efficiency and minimize downtime. Instead of relying on resource-draining, custom-built solutions, these tools offer a unified approach that acts as a force multiplier for your teams. Just like cloud development itself, well-chosen infrastructure monitoring software empowers you to achieve more with less. Here's how to select the best tools for your needs.
Building a security-first culture for MSPs: Always ready, always protected


For IT professionals and MSPs, a company’s security posture is influenced not only by technology but also by its team's daily actions. Whether intentional or accidental, human behavior plays a significant role in either fortifying or undermining security measures.
Verizon Business’ 2024 Data Breach Investigations Report revealed that 68 percent of breaches this year involved a non-malicious human element, such as people falling for phishing schemes, mishandling sensitive information or being tricked by a social engineering ploy.
The biggest mistake organizations make when implementing AI chatbots


Worldwide spending on chatbots is expected to reach $72 billion by 2028, up from $12 billion in 2023, and many organizations are scrambling to keep pace. As companies race to develop advanced chatbots, some are compromising performance by prioritizing data quantity over quality. Just adding data to a chatbot’s knowledge base without any quality control guardrails will result in outputs that are low-quality, incorrect, or even offensive.
This highlights the critical need for rigorous data hygiene practices to ensure accurate and up-to-date conversational AI software responses.
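Those data hygiene guardrails can be surprisingly simple to start with. The sketch below is a minimal, hypothetical pre-ingestion filter (the field names and thresholds are assumptions, not any particular product's API) that drops duplicate, stale, and near-empty documents before they reach a chatbot's knowledge base:

```python
import hashlib
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=365)  # assumption: docs untouched for a year are stale
MIN_LENGTH = 40                # assumption: very short snippets carry little signal

def clean_corpus(docs, now=None):
    """Filter a list of {'text': str, 'updated': datetime} documents before
    ingestion: drop near-empty snippets, stale entries, and exact duplicates."""
    now = now or datetime.now(timezone.utc)
    seen, kept = set(), []
    for doc in docs:
        text = doc["text"].strip()
        if len(text) < MIN_LENGTH:
            continue  # too short to be a useful answer source
        if now - doc["updated"] > MAX_AGE:
            continue  # stale content breeds outdated answers
        digest = hashlib.sha256(text.lower().encode()).hexdigest()
        if digest in seen:
            continue  # exact duplicate already kept
        seen.add(digest)
        kept.append(doc)
    return kept
```

In practice a real pipeline would add semantic deduplication and human review for sensitive topics, but even this level of gatekeeping prevents the "quantity over quality" failure mode described above.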
A technical overview of Cisco IoT part 5: Exploring Cisco's competition in the expanding IoT landscape


The fifth and final article in the Cisco IoT series explores how Cisco’s competitors are navigating the rapidly expanding Internet of Things (IoT) landscape. Building on the previous installment, which covered Cisco Meraki, training resources, and certification pathways, this article shifts focus to examine how other key players in the networking industry are positioning themselves to capitalize on the IoT revolution.
IoT is transforming industries, with use cases expanding across healthcare, retail, manufacturing and beyond. It’s reshaping how organizations operate, offering enhanced security, cost savings and new capabilities through advanced sensor technologies. As this sector evolves, Cisco and its competitors are racing to offer innovative solutions. But how do Cisco’s competitors stack up when it comes to IoT? This article takes a closer look at Juniper, Aruba, Arista and other vendors to assess differing IoT strategies and where each stands in comparison.
SIEM is the shortcut for implementing threat detection best practices


The recent release of “Best Practices for Event Logging and Threat Detection” by CISA and its international partners is a testament to the growing importance of effective event logging in today’s cybersecurity landscape. With the increasing sophistication and proliferation of cyber attacks, organizations must constantly adapt their security strategies to address these advanced threats. CISA’s best practices underscore how a modern SIEM (Security Information and Event Management) solution, especially one equipped with UEBA (User and Entity Behavior Analytics) capabilities, is critical for organizations trying to adopt the best practices in this domain.
A modern SIEM with UEBA can help organizations streamline their event logging policies. It automates the collection and standardization of logs across diverse environments, from cloud to on-premises systems, ensuring that relevant events are captured consistently. This aligns with CISA’s recommendation for a consistent, enterprise-wide logging policy, which enhances visibility and early detection of threats. We've seen a rise in detection and response technologies, from Cloud Detection and Response (CDR) to Extended Detection and Response (XDR), being positioned as alternatives to SIEM. However, when it comes to consistently capturing and utilizing events across diverse environments, SIEM remains the preferred solution for large organizations facing these complex challenges.
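The "standardization" step is the heart of that capability: raw events from different systems arrive in different shapes, and detection rules only become portable once everything is mapped onto one schema. A minimal sketch of that idea, using two entirely hypothetical source formats (the field names here are illustrative, not any vendor's actual log format):

```python
def normalize_event(raw: dict, source: str) -> dict:
    """Map a raw event from one of two hypothetical sources onto a single
    common schema (timestamp, user, action, source), so downstream detection
    rules only ever have to understand one format."""
    if source == "cloud":  # hypothetical cloud audit-log shape
        return {
            "timestamp": raw["eventTime"],
            "user": raw["userIdentity"]["userName"],
            "action": raw["eventName"],
            "source": source,
        }
    if source == "onprem":  # hypothetical syslog-derived shape
        return {
            "timestamp": raw["ts"],
            "user": raw["login"],
            "action": raw["msg"],
            "source": source,
        }
    raise ValueError(f"unknown source: {source}")
```

Real SIEMs do this at scale with parser libraries and taxonomies rather than hand-written branches, but the principle is the same: one schema in, consistent correlation out.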
From classical to quantum: A new era in computing


Data is a business’s most critical asset, and companies today have more data than ever before. IDC projects that by 2025, the collective sum of the world’s data will reach 175 zettabytes. This data has immense potential to be leveraged for informed decision making, but across industries, organizations struggle to harness the power of their data effectively due to the limitations of traditional computing technologies. These systems often lack the speed, accuracy, and energy efficiency required, making it increasingly difficult for businesses to extract valuable insights. The need for more powerful computing solutions is becoming urgent as businesses grapple with the ever-growing complexity and volume of data.
Enter quantum computing, which addresses these limitations by providing a powerful alternative. Representing a significant leap forward from classical computing, quantum computing offers unprecedented speed and problem-solving capabilities. Traditional computers process information using bits, which can only be in a state of one or zero. In contrast, quantum computing uses quantum bits, or qubits, which leverage the principles of superposition and entanglement. Qubits can exist in multiple states simultaneously -- a weighted combination of one and zero -- allowing quantum computers to perform certain operations much faster than classical systems.
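The superposition idea can be made concrete with a few lines of ordinary code. A single qubit is described by two complex amplitudes, one for the "zero" state and one for the "one" state; the squared magnitudes of those amplitudes give the measurement probabilities and must sum to 1. A minimal sketch:

```python
import math

def measurement_probs(alpha: complex, beta: complex):
    """Given a qubit state alpha|0> + beta|1>, return the probabilities of
    measuring 0 and 1, i.e. |alpha|^2 and |beta|^2."""
    norm = abs(alpha) ** 2 + abs(beta) ** 2
    if not math.isclose(norm, 1.0, rel_tol=1e-9):
        raise ValueError("state must be normalized: |alpha|^2 + |beta|^2 = 1")
    return abs(alpha) ** 2, abs(beta) ** 2

# Equal superposition (|0> + |1>) / sqrt(2): a 50/50 chance of each outcome.
p0, p1 = measurement_probs(1 / math.sqrt(2), 1 / math.sqrt(2))
```

Simulating one qubit this way is trivial; the power (and the classical simulation cost) comes from the fact that n entangled qubits require 2^n amplitudes to describe, which is where quantum hardware pulls ahead.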
The importance of nudge theory in email security


It is estimated that people make 35,000 decisions every day -- or, over a waking day, roughly one decision every two seconds. That’s not to say that each decision has a big impact; most are small and often instinctive, like taking a sip of coffee, turning on the work laptop, or clicking a hyperlink in an email.
In fact, it is that instinctive use of email that can lead to cyberattacks and data breaches. Email is the backbone of business communication. Despite remote and hybrid work driving the adoption of messaging apps and video conferencing, four out of five employees say email is their preferred way to communicate.
Weathering the alert storm


The more layers a business adds to its IT and cloud infrastructure, the more alerts it creates to detect issues and anomalies. As a business heads towards a critical mass, how can it prevent DevOps teams from being bombarded by ‘alert storms’ as they try to differentiate between real incidents and false positives?
The key is to continuously review and update an organization's monitoring strategy, specifically targeting the removal of unnecessary or unhelpful alerts. This is especially important for larger companies that generate thousands of alerts due to multiple dependencies and potential failure points. Identifying the ‘noisiest’ alerts, or those that are triggered most often, will allow teams to take preventive action to weather alert storms and reduce ‘alert fatigue’ -- a diminished ability to identify critical issues.
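Finding the noisiest alerts is, at its simplest, a frequency count over the alert log. The sketch below is an illustrative starting point (the alert names and log format are assumptions): tally how often each alert fires and surface the top offenders as the first candidates for threshold tuning or removal.

```python
from collections import Counter

def noisiest_alerts(alert_log, top_n=3):
    """Tally how often each alert name appears in the log and return the
    top_n most frequent -- the prime candidates for tuning or removal."""
    return Counter(alert_log).most_common(top_n)

# Hypothetical day of alert firings pulled from a monitoring system.
log = ["disk_full", "cpu_high", "disk_full",
       "cert_expiry", "disk_full", "cpu_high"]
```

A real review would also weigh each alert's actionability (how often it led to a genuine incident), but raw firing frequency is usually where alert-fatigue cleanups begin.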
As the workforce trends younger, account takeover attacks are rising


Account Takeover (ATO) incidents are on the rise, with one recent study finding that 29 percent of US adults were victims of ATO attacks in the past year alone. That isn’t necessarily surprising: what we call an “Account Takeover attack” usually comes as the result of stolen credentials -- and this year’s Verizon Data Breach Investigations Report (DBIR) noted that credential theft has played a role in a whopping 31 percent of all breaches over the past 10 years. Basically, an ATO happens when a cybercriminal uses those stolen credentials to access an account that doesn’t belong to them and leverages it for any number of nefarious purposes.
Those credentials can come from anywhere. Yes, modern attackers can use deepfakes and other advanced tactics to get their hands on credentials -- but the truth is, tried-and-true methods like phishing and business email compromise (BEC) attacks are still remarkably effective. Worse still, because people tend to reuse passwords, a single set of stolen credentials can often lead to multiple compromised accounts. As always, human beings are the weakest point in any system.
RTOS vs Linux: The IoT battle extends from software to hardware


There’s certainly something happening regarding operating systems in the Internet of Things (IoT). Chips are getting smarter, devices are getting smaller, and speeds are getting faster. As a result, device developers are more often experimenting with their operating system of choice, moving away from Linux and toward real-time operating systems (RTOS).
This is an evolution on two fronts. On the software side, applications requiring low latency and deterministic responses are turning to Zephyr, FreeRTOS, and ThreadX. And now, on the hardware side, we’re seeing more chip manufacturers entering the fray with RTOS-specific hardware that rivals or surpasses the performance of entry-level Linux boards. This is a big deal, since these chips optimize hardware-software integration, creating a more complete ecosystem for purpose-built RTOS solutions.
AI for social good: Highlighting positive applications of AI in addressing social challenges -- along with the potential pitfalls to avoid


Depending on who you ask, artificial intelligence could be the future of work or the harbinger of doom. The reality of AI technology falls somewhere between these two extremes. Although there are certainly some use cases of AI technology that could be harmful to society, others have seen the technology substantially improve their productivity and efficiency.
However, artificial intelligence has even more significant implications than improving the productivity of workers and businesses. Some use cases of AI have been proposed that could have profound social implications and address social challenges that are becoming more pressing today.
Data resilience and protection in the ransomware age


Data is the currency of every business today, but it is under significant threat. The pace at which companies collect and store data is driving the adoption of multi-cloud solutions to store and protect it. At the same time, ransomware attacks are increasing in frequency and sophistication. This is supported by Rapid7’s Ransomware Radar Report 2024, which states, “The first half of 2024 has witnessed a substantial evolution in the ransomware ecosystem, underscoring significant shifts in attack methodologies, victimology, and cybercriminal tactics.”
Against this backdrop, companies must have a data resilience plan in place which incorporates four key facets: data backup, data recovery, data freedom and data security.
It’s time to treat software -- and its code -- as a critical business asset


Software-driven digital innovation is essential for competing in today's market, and the foundation of this innovation is code. However, there are widespread cracks in this foundation -- lines of bad, insecure, and poorly written code -- that manifest into tech debt, security incidents, and availability issues.
The cost of bad code is enormous, estimated at over a trillion dollars. Just as building a housing market on bad loans would be disastrous, businesses need to consider the impact of bad code on their success. The C-suite must take action to ensure that its software and its maintenance are constantly front of mind in order to run a world-class organization. Software is becoming a CEO and board-level agenda item because it has to be.
The newest AI revolution has arrived


Large language models (LLMs) and other forms of generative AI are revolutionizing the way we do business. The impact could be huge: McKinsey estimates that current gen AI technologies could eventually automate about 60-70 percent of employees’ time, facilitating productivity and revenue gains of up to $4.4 trillion. These figures are astonishing given how young gen AI is. (ChatGPT debuted just under two years ago -- and just look at how ubiquitous it is already.)
Nonetheless, we are already approaching the next evolution in intelligent AI: agentic AI. This advanced version of AI builds upon the progress of LLMs and gen AI and will soon enable AI agents to solve even more complex, multi-step problems.
© 1998-2025 BetaNews, Inc. All Rights Reserved.