Data resilience and protection in the ransomware age
Data is the currency of every business today, but it is under significant threat. As companies rapidly collect and store data, they are increasingly adopting multi-cloud solutions to store and protect it. At the same time, ransomware attacks are growing in both frequency and sophistication. This is supported by Rapid7’s Ransomware Radar Report 2024, which states, “The first half of 2024 has witnessed a substantial evolution in the ransomware ecosystem, underscoring significant shifts in attack methodologies, victimology, and cybercriminal tactics.”
Against this backdrop, companies must have a data resilience plan in place which incorporates four key facets: data backup, data recovery, data freedom and data security.
It’s time to treat software -- and its code -- as a critical business asset
Software-driven digital innovation is essential for competing in today's market, and the foundation of this innovation is code. However, there are widespread cracks in this foundation -- lines of bad, insecure, and poorly written code -- that manifest as tech debt, security incidents, and availability issues.
The cost of bad code is enormous, estimated at over a trillion dollars. Just as building a housing market on bad loans would be disastrous, businesses need to consider the impact of bad code on their success. The C-suite must take action to ensure that its software and its maintenance are constantly front of mind in order to run a world-class organization. Software is becoming a CEO and board-level agenda item because it has to be.
The newest AI revolution has arrived
Large language models (LLMs) and other forms of generative AI are revolutionizing the way we do business. The impact could be huge: McKinsey estimates that current gen AI technologies could eventually automate about 60-70 percent of employees’ time, facilitating productivity and revenue gains of up to $4.4 trillion. These figures are astonishing given how young gen AI is. (ChatGPT debuted just under two years ago -- and just look at how ubiquitous it is already.)
Nonetheless, we are already approaching the next evolution in intelligent AI: agentic AI. This advanced version of AI builds upon the progress of LLMs and gen AI and will soon enable AI agents to solve even more complex, multi-step problems.
The evolution of AI voice assistants and user experience
The world of AI voice assistants has been moving at a breakneck pace, and Google's latest addition, Gemini, is shaking things up even more. As tech giants scramble to outdo each other, creating voice assistants that feel more like personal companions than simple tools, Gemini seems to be taking the lead in this race. The competition is fierce, but with Gemini Live, we're getting a taste of what the future of conversational AI might look like.
Addressing the demographic divide in AI comfort levels
In a recent Riverbed survey of 1,200 business leaders across the globe, 6 in 10 organizations (59 percent) feel positive about their AI initiatives, while only 4 percent are worried. Today, 37 percent of respondents said their companies were fully prepared to implement AI, but looking out on the horizon, a large majority (86 percent) said their AI initiatives would be ready by 2027.
But all is not rosy. Senior business leaders believe there is a generational gap in comfort with using AI. When asked who they thought was most comfortable using AI, they said Gen Z (52 percent), followed by Millennials (39 percent), Gen X (8 percent) and Baby Boomers (1 percent).
The five steps to network observability
Let's begin with a math problem -- please solve for “X.” Network Observability = Monitoring + X.
The answer is “Context.” Network observability is monitoring plus context. Monitoring can tell the Network Operations (NetOps) team that a problem exists; observability tells them why it exists, giving the team real-time, actionable insights into the network’s behavior and performance. This makes NetOps more efficient, which means lower mean time to repair (MTTR), less downtime, and ultimately better performance for the applications and business that depend on the network. As networks grow more complex while IT budgets stay flat, observability has become essential. In the past two years, I’ve heard the term used far more often by engineers and practitioners on the ground. Gartner predicted that the market for network observability tools will grow 15 percent from 2022 to 2027.
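The “monitoring plus context” idea can be sketched in a few lines of code. This is purely illustrative -- the metric names, topology, and change records below are hypothetical, not drawn from any real tool:

```python
# A minimal sketch of "observability = monitoring + context".
# All metric names, devices, and change records are made-up examples.

def monitor(metric_name, value, threshold):
    """Monitoring: detect THAT a problem exists."""
    if value > threshold:
        return {"alert": metric_name, "value": value}
    return None

def add_context(alert, topology, recent_changes):
    """Observability: enrich the alert with clues about WHY it exists."""
    if alert is None:
        return None
    device = topology.get(alert["alert"], "unknown device")
    changes = [c for c in recent_changes if c["device"] == device]
    return {**alert, "device": device, "recent_changes": changes}

# Monitoring alone says latency is high; context points at a likely cause.
alert = monitor("wan_latency_ms", 250, threshold=100)
observation = add_context(
    alert,
    topology={"wan_latency_ms": "edge-router-1"},
    recent_changes=[{"device": "edge-router-1", "change": "QoS policy update"}],
)
print(observation)
```

The raw alert only reports a threshold breach; the enriched observation ties it to a device and a recent configuration change, which is the kind of context that shortens MTTR.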
Shining a light on spyware -- how to keep high-risk individuals safe
With elections taking place across the world, a tremendous amount of attention is being placed on the threat posed by AI and digital misinformation. However, one threat that deserves more focus is spyware.
Spyware has already been used by nation states and governments during elections to surveil political opponents and journalists. For example, the government of Madagascar has been accused of using the technology to conduct widespread surveillance ahead of its elections.
The five stages of vulnerability management
Nearly every organization today builds a lot of software, and the majority of that software is developed by cobbling together open source components. When using open source and trying a software composition analysis (SCA) scanner for the first time, it is not uncommon for those organizations to be surprised at what they learn about their open source usage. It often quickly comes to light that they have a backlog of new, unplanned work in the form of security issues in dependencies. They need to fix these issues not just for the organization itself but also to stay compliant with certifications such as PCI DSS or SOC 2.
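Conceptually, an SCA scanner matches an application's declared dependencies against a database of known vulnerabilities. The sketch below illustrates that core idea only; the package names, versions, and advisories are invented for the example:

```python
# Hedged sketch of the core SCA idea: flag dependencies whose versions
# appear in a known-vulnerability feed. All names here are hypothetical.

# Hypothetical advisory feed: package -> versions with known issues.
ADVISORIES = {
    "examplelib": {"1.0.0", "1.0.1"},
    "otherlib": {"2.3.0"},
}

def scan(dependencies):
    """Return the subset of (name, version) pairs with known advisories."""
    findings = []
    for name, version in dependencies:
        if version in ADVISORIES.get(name, set()):
            findings.append((name, version))
    return findings

deps = [("examplelib", "1.0.1"), ("otherlib", "2.4.0"), ("safelib", "0.9")]
findings = scan(deps)
print(findings)
```

Real scanners also resolve transitive dependencies and match version ranges rather than exact strings, which is why first-time scans often surface far more findings than teams expect.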
That’s when these organizations begin to experience the five stages of vulnerability management.
How CISOs should tackle the year of deepfakes
Deepfakes are picking up steam and no one is safe -- not even the President of the United States, who was recently the subject of an election-based audio deepfake scandal. And with an unavoidably heated year ahead with the impending presidential election, I anticipate deepfakes will continue to proliferate.
Deepfakes are a unique cybersecurity topic. They stem from social engineering and are always evolving, but there’s a responsibility for CISOs to position their organizations to combat them.
Closing the gap between cyber risk strategy and execution
Effective cyber risk management is more crucial than ever for organizations across all industries as threat actors constantly evolve their tactics. Yet the latest Cyber Risk Peer Benchmarking Report from Critical Start unveils a striking dichotomy between strategy and execution in cyber risk management. While 91 percent of organizations acknowledge the criticality of having a robust risk management strategy, the execution of these strategies appears to fall short.
This gap between cyber risk strategy and execution widens as organizations grow larger. To fully comprehend an organization’s risk and execute strategies effectively, IT leaders must first understand the lifecycle of cyber risk and ensure each stage is addressed.
Identity governance: Balancing cost reduction with effective risk management
Cost reduction is a top priority for many organizations, leading to the adoption of various technologies to automate tasks and improve efficiencies for cost savings. However, minimizing risk should also be a key objective for every business.
To achieve this, companies are looking into Identity Governance and Administration (IGA), which is a policy framework and security solution for automating the creation, management, and certification of user accounts, roles, and access rights. This ensures consistency, efficiency, and improved awareness, all of which are essential for reducing security risks. However, implementing IGA can often be seen as a laborious task that gets abandoned before the business experiences the benefits it has to offer.
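One IGA task, access certification, reduces to comparing each user's actual entitlements against what their role permits and flagging the excess for review. The roles and entitlements below are hypothetical examples, not any vendor's model:

```python
# Illustrative sketch of access certification in IGA: flag entitlements
# that a user's role does not justify. Roles/entitlements are made up.

ROLE_POLICY = {
    "engineer": {"repo_read", "repo_write"},
    "analyst": {"repo_read", "reports_read"},
}

def certify(user_role, actual_entitlements):
    """Return entitlements not justified by the user's role, sorted."""
    allowed = ROLE_POLICY.get(user_role, set())
    return sorted(set(actual_entitlements) - allowed)

# An analyst who has accumulated write access gets flagged for review.
excess = certify("analyst", {"repo_read", "repo_write", "reports_read"})
print(excess)
```

Automating this comparison across thousands of accounts, rather than reviewing spreadsheets by hand, is where the consistency and efficiency gains of IGA come from.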
Companies aren't 'owning' their data
With a rapidly developing threat landscape, an increase in high-profile data breaches, the introduction of new legislation, and customer tolerance for poor data handling at an all-time low, the stakes are high for companies to have robust cybersecurity in place. Yet despite their best efforts, companies are often found not to be doing enough to protect their assets.
Often, this is a case of ‘too much, too fast’. As businesses invest in new technologies, their day-to-day operations are being supported by ever more complex and fragmented technology platforms. At the same time, the amount of customer data available to them is growing and constantly streaming in, and bad actors are launching ever more sophisticated attacks. Meanwhile, leaders are not fully aware of, or do not own responsibility for, their cybersecurity plans. As the digital world evolves with new threats and regulations, business leaders must recognize the importance of data protection. If they do not, they cannot adequately protect their customers' data, and they risk losing customer trust -- and even their continued existence in business.
Why businesses can't go it alone over the EU AI Act
When the European Commission proposed the first EU regulatory framework for AI in April 2021, few would have imagined the speed at which such systems would evolve over the next three years. Indeed, according to the 2024 Stanford AI Index, in the past 12 months alone, chatbots have gone from scoring around 30-40 percent on the Graduate-Level Google-Proof Q&A Benchmark (GPQA) test, to 60 percent. That means chatbots have gone from scoring only marginally better than would be expected by randomly guessing answers, to being nearly as good as the average PhD scholar.
The benefits of such technology are almost limitless, but so are the ethical, practical, and security concerns. The landmark EU AI Act (EUAIA) legislation was adopted in March this year in an effort to overcome these concerns, by ensuring that any systems used in the European Union are safe, transparent, and non-discriminatory. It provides a framework for establishing:
The $13 billion problem: Tackling the growing sophistication of account takeovers
Fraudsters have used account takeovers (ATOs) to victimize 29 percent of internet users, resulting in $13 billion in losses in 2023. Over three-quarters of security leaders listed ATOs as one of the most concerning cyber threats, and the danger grows as bad actors leverage AI to launch more potent attacks.
The Snowflake breach demonstrates the devastating consequences of ATOs. Attackers gained access to 165 of the data platform’s customers’ systems, including AT&T and Ticketmaster, and exfiltrated hundreds of millions of records containing sensitive data. The attack wasn’t some brilliant hacking scheme -- the bad actors simply used legitimate credentials to log into the platform.
Why third-party email filters may be ineffective in Microsoft 365 environments
Because email is the primary source of initial entry in many breaches, many organizations pay for sophisticated, third-party email filtering solutions on top of the protections afforded by Microsoft 365. This is a wise investment; having layers of protection by different vendors helps eliminate blind spots found in any one vendor solution and provides complexity that can foil attack attempts.
Yet, few know that threat actors can easily bypass these third-party filtering products by directing emails to onmicrosoft.com domains that are an inherent part of the Microsoft 365 configuration.
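One way defenders check for this bypass is to verify that inbound mail actually traversed the third-party filter, for instance by looking for the filter's hop in the message's Received headers. The sketch below assumes a placeholder hostname, `filter.example.com`, for the filtering service; real deployments typically enforce this with mail flow rules or connector restrictions rather than post-hoc header checks:

```python
# Hedged sketch: detect mail that skipped the third-party filter by
# checking the Received chain. "filter.example.com" is a placeholder
# for your filtering service's hostname, not a real product.

from email import message_from_string

FILTER_HOST = "filter.example.com"  # hypothetical filtering-service host

def passed_through_filter(raw_message):
    """Return True if any Received header mentions the filtering service."""
    msg = message_from_string(raw_message)
    return any(FILTER_HOST in h for h in msg.get_all("Received", []))

# Mail delivered straight to the onmicrosoft.com domain has no filter hop.
direct = ("Received: from attacker.example by tenant.onmicrosoft.com\n"
          "Subject: hi\n\nbody")
# Mail that went through the filter carries the filter's hop.
filtered = ("Received: from filter.example.com by tenant.onmicrosoft.com\n"
            "Subject: hi\n\nbody")

print(passed_through_filter(direct))
print(passed_through_filter(filtered))
```

Received headers can be forged by an attacker's own servers, so this check is a detection aid, not a substitute for restricting which connectors may deliver mail to the tenant.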
© 1998-2024 BetaNews, Inc. All Rights Reserved.