Addressing workers' concerns about AI
Artificial intelligence (AI) and machine learning (ML) solutions are being adopted across every industry today. Quite often, these initiatives involve deploying ML models into operational settings, where the model's output ends up as a widget on a screen or a number in a report put in front of hundreds, if not thousands, of front-line employees. These could be underwriters, loan officers, fraud investigators, nurses, teachers, claims adjusters, or attorneys. No industry is immune to these transformations.
These initiatives are typically driven from the top down. Management monitors KPIs and looks for ways to improve them, and increasingly, AI/ML initiatives are identified as a means to that end. Certainly, there’s plenty of communication among executive, finance, data science, and operational leaders about these initiatives. Unfortunately, in many of the organizations I’ve worked with, the group most commonly left out of the discussion is the front-line employees themselves.
The prompt plays a critical role in crafting emails with LLMs
In the realm of digital communication, crafting the perfect email is both an art and a science, especially when the goal is to convert that email into a meeting or a tangible outcome. With the advent of Large Language Models (LLMs) like GPT (Generative Pre-trained Transformer), the stakes have been raised, offering unprecedented opportunities for personalization, efficiency, and effectiveness in email outreach. At the heart of this revolution lies a seemingly simple yet profoundly impactful element: the prompt.
A prompt, in the context of LLMs, is more than just a starting point for generating text; it's the steering wheel that guides the AI in a specific direction, ensuring that the output aligns with the sender's intentions, tone, and objectives. The importance of prompts becomes even more pronounced when considering the goal of converting an email into a meeting -- a task that requires precision, personalization, and persuasion. Prompts provide the structure, context, and constraints that make all three possible.
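As a rough illustration of how much the prompt shapes the result, the sketch below builds a meeting-request email with the openai Python client. The model name, recipient details, and message wording are illustrative assumptions, not a prescribed recipe.

```python
# A minimal sketch: steering an LLM toward a meeting-request email.
# Assumes the openai Python package (v1-style client) and an API key
# in the OPENAI_API_KEY environment variable; details are illustrative.
from openai import OpenAI

client = OpenAI()

# The system prompt sets intention and tone; the user prompt adds the
# personalization and the concrete objective (booking a meeting).
system_prompt = (
    "You write concise, friendly B2B outreach emails. "
    "Every email must end with a clear ask for a 20-minute meeting."
)
user_prompt = (
    "Recipient: Jane Doe, VP of Operations at Acme Logistics.\n"
    "Context: she recently posted about rising fuel costs.\n"
    "Objective: introduce our route-optimization product and propose "
    "a short call next week.\n"
    "Tone: professional, no hype, under 120 words."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)
print(response.choices[0].message.content)
```

Change a single line of the prompt (the tone, the word limit, the ask) and the entire email changes with it, which is the article's point: the prompt, as much as the model, determines whether the message earns a reply.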
Your company needs a BEC policy and five other email security trends
Hardly a week goes by without news of another email-based attack, whether a phishing campaign or a Business Email Compromise (BEC) scam. These attacks can cause a great deal of damage to infrastructure and to an organization’s image, whether it is a large enterprise, a small-to-medium business (SMB) or an even smaller retailer. The FBI (Federal Bureau of Investigation) reports that the average financial loss per BEC attack is $125,000, and last year it estimated the business email fraud industry to be worth a whopping $50 billion.
These attacks are increasingly creative, and they typically involve impersonating someone such as the head of the organization or its finance function. If someone responds on behalf of the executive, they could unknowingly hand over the keys to the kingdom, causing significant losses. With that in mind, let’s review some of the larger email security trends.
Securing AI copilots is critical -- here's how
The use of AI copilots is already helping businesses save time and gain productivity. One recent study found that employees who gained proficiency with copilots saved 30 minutes a day, the equivalent of 10 hours a month, while the average employee saved 14 minutes a day, or nearly five hours each month.
AI copilots essentially allow people to interact with business productivity tools more efficiently: you can ask these tools questions, synchronize data, and perform automated actions more easily than before. In the study referenced above, 70 percent of users reported greater productivity, while 68 percent said copilots improved the quality of their work. However, while the business benefits are significant, these copilots can also introduce new security risks that organizations must be aware of -- and have a plan for.
Is over-focusing on privacy hampering the push to take full advantage of AI?
In 2006, British mathematician Clive Humby declared that data is the new oil -- and so could be the fuel source for a new, data-driven Industrial Revolution.
Given that he and his wife helped Tesco make £90m from its first attempt at a Clubcard, he should know. And it looks like the “derricks” out there are actually pumping that informational black gold up to the surface: the global big data analytics market is predicted to be worth more than $745bn by 2030 -- and, while it may not be the most dependable metric, Big Tech is throwing billions at AI at a rate described as “some of the largest infusions of cash in a specific technology in Silicon Valley history”.
Understanding the risks of integrating GenAI in GRC programs: A framework for compliance teams
NIST's recent AI risk proposal, the AI RMF Generative AI Profile, aims to assist organizations in understanding AI risks, both internal and those arising from third-party vendors. While GenAI adoption is on the rise across various sectors, compliance managers remain more cautious about incorporating AI into their compliance programs. Despite all the hype around AI, a survey of approximately 300 compliance professionals conducted by The Wall Street Journal revealed that only one-third currently incorporate GenAI within their compliance programs.
Collaborative efforts between entities like NIST and prominent organizations including OpenAI and Microsoft are underway to expedite the development of standards and recommendations for the responsible deployment of AI. As organizations grapple with implementing GenAI themselves, it becomes imperative to understand how third parties are integrating the technology, in order to better evaluate corporate risk and, in turn, enhance regulatory and compliance reporting.
How to block bad actors and become more cyber resilient
As a wise man once said, a failure to plan is a plan to fail. This is especially true in the world of cybersecurity, where it is all but inevitable that an organization will face a security incident.
According to the 2024 Data Protection Trends report from Veeam, ransomware is the leading type of cyber crime, due to its lucrative nature. Cyber criminals have found that stealing, encrypting and selling data back to their victims is highly profitable, which has led to ransomware becoming a billion-dollar industry. Between ransom payments, maintenance, and lost business due to downtime, the average ransomware attack costs a business around £3.5 million.
The double-edged sword of AI in cybersecurity
As artificial intelligence (AI) continues to advance, its impact on cybersecurity grows more significant. AI is an incredibly powerful tool in the hands of both cyber attackers and defenders, playing a pivotal role in the evolving landscape of digital threats and security defenses: attackers use it to conduct more effective attacks, while defenders use it to deter and counter threats.
The incorporation of AI into malicious social engineering campaigns ushers in a new era in which cyber threat actors are more convincingly deceptive. With access to vast amounts of data, threat actors can increase the success and effectiveness of large-scale phishing campaigns, or use that same data to spread disinformation online.
GDPR -- easy as ABC with DLP
Regulation, compliance, and security are always entwined in modern-day discussions of the latest innovations and technological advancements. Most recently, the fanfare around AI has quickly given rise to conversations about how it affects companies’ ability to comply with the General Data Protection Regulation (GDPR).
GDPR demands that companies stay within its data guardrails, yet achieving 100 percent compliance can often feel like walking a tightrope. Fortunately, various technologies exist to help, such as Data Loss Prevention (DLP), which can spot personal data before it leaves the organization.
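As a rough illustration of what a DLP check does under the hood, the sketch below scans outbound text for patterns that look like personal data. It is a minimal, hypothetical example; the patterns and function names are assumptions for illustration, not any specific product's API.

```python
# Minimal DLP-style sketch: flag outbound text that appears to contain
# personal data before it leaves the organization. Patterns are
# illustrative assumptions, not an exhaustive or production-ready set.
import re

PII_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outbound(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in the message."""
    findings = []
    for name, pattern in PII_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group()))
    return findings

message = "Hi, Anna's card is 4111 1111 1111 1111, email anna@example.com"
for kind, value in scan_outbound(message):
    print(f"BLOCKED: possible {kind}: {value}")
```

A real DLP platform layers context on top of this kind of pattern matching (who is sending, to where, under which policy), but the basic principle is the same: inspect data in motion and stop anything that shouldn't leave.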
Generative AI: Productivity dream or security nightmare?
The field of AI has been around for decades, but its current surge is rewriting the rules at an accelerated rate. Fueled by increased computational power and data availability, this AI boom brings with it opportunities and challenges.
AI tools fuel innovation and growth by enabling businesses to analyze data, improve customer experiences, automate processes, and innovate products -- at speed. Yet, as AI becomes more commonplace, concerns about misinformation and misuse arise. With businesses relying more on AI, the risk of unintentional data leaks by employees also goes up. For many, though, the benefits outweigh the risks. So, how can companies empower employees to harness the power of AI without risking data security?
Biometrics explained: Breaking down the technology's controversy and contributions to security
Advancements in technology within the last decade have sparked the increased use of digital biometric verification. The technology’s modern verification capabilities have outpaced traditional cybersecurity attack methods geared toward credential theft -- making the technology an attractive enhancement for corporations seeking to provide a more secure, seamless experience for users to verify their identities. Now, users can leverage biometric technology for secure access to critical information, such as applications in the financial and healthcare sectors.
However, recent pushback from the Federal Trade Commission (FTC) on the use of biometrics for identity verification, particularly age verification, highlights compliance concerns surrounding enterprises’ data collection and storage practices -- especially the collection of minors’ biometric information.
Measuring AI effectiveness beyond productivity metrics
Last year was an AI milestone marked by enthusiasm, optimism, and caution. AI-powered developer tools promised to boost productivity by generating code and automating repetitive, tedious tasks. A year later, organizations are struggling to quantify the impact of their AI initiatives and are reevaluating their metrics to ensure they reflect the desired business outcomes.
Measuring developer productivity has historically been a challenge, with or without the introduction of AI-powered developer tools. Last year, McKinsey & Company described developer productivity measurement as a “black box,” noting that in software development, “the link between inputs and outputs is considerably less clear” than in other functions.
Why the CHIPS Act is the lifeline US tech desperately needs
In the next five to ten years, the United States faces a critical juncture in its technological trajectory, heavily influenced by the implications of the CHIPS Act. As a seasoned venture capitalist and Managing Director of Venture Labs, I have closely monitored the evolution of technology and innovation ecosystems. The CHIPS Act represents more than just a policy shift; it is a strategic maneuver poised to revolutionize the hardware industry, foster innovation, and bolster national security.
Historically, hardware production has been dominated by a few key players, leading to centralized control that stifles competition and innovation. The CHIPS Act aims to dismantle this concentration, decentralizing hardware production and empowering a diverse array of developers. This shift is crucial not only for fostering competition but also for driving technological advancements. By creating an environment where smaller companies can thrive, we can expect a surge in innovative solutions that address emerging challenges across various industries.
How shifting information left can empower developers and accelerate innovation
Development teams are increasingly seen as the engine room of the modern digital enterprise, tasked with building the new services and capabilities that the business needs to thrive. However, with resources stretched to their limit, organizations must find a way to empower their developers to work more productively, so they can deliver newer, better digital capabilities faster and more reliably. If they fail to do so, it will be more difficult to keep pace with market demands, and many will see their competitors gain the advantage.
In response, organizations are increasingly adopting a shift left approach, to ensure that new code is tested earlier in the software development lifecycle (SDLC). This reduces the risk of errors or vulnerabilities that delay innovation when applications or features have to be rolled back and reworked by developers. But shift left should not be about moving extra work “left” in the SDLC, or demanding developers assume extra responsibilities. It should be about empowering developers to work smarter, by shifting all relevant information left. Developers should have all the insight they need, when they require it, to make better decisions.
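As a rough sketch of what "shifting information left" can look like in practice, the example below is a Git pre-commit hook written in Python that surfaces likely hard-coded secrets to the developer at commit time, before the code ever reaches a pipeline or a reviewer. The patterns and file handling are illustrative assumptions, not a substitute for a dedicated scanning tool.

```python
#!/usr/bin/env python3
# Illustrative pre-commit hook: warn the developer about likely
# hard-coded secrets in staged files before the commit is created.
# Patterns are assumptions for the sketch, not an exhaustive list.
import re
import subprocess
import sys

SECRET_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic API key": re.compile(r"(?i)\b(?:api|secret)[_-]?key\s*[=:]\s*['\"][^'\"]{16,}['\"]"),
}

def staged_files() -> list[str]:
    """List files staged for the current commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    findings = []
    for path in staged_files():
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except OSError:
            continue
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                findings.append(f"{path}: possible {label}")
    if findings:
        print("Commit blocked -- review these findings first:")
        print("\n".join(findings))
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Saved as an executable .git/hooks/pre-commit script (or wired in through a hook manager), a check like this puts the relevant information in front of the developer immediately, rather than as a failed pipeline or an audit finding weeks later -- which is the spirit of shifting information, not just work, to the left.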
Redefining security in mobile networks with clientless SASE
As organizations adapt their IT ecosystems to incorporate IoT devices and expand remote working arrangements that allow employees to use personal mobile devices, enterprise mobility has become indispensable to modern business operations. Nonetheless, this shift presents numerous security challenges and lifecycle management considerations, especially given that mobile devices connecting to networks frequently lack compatibility with traditional security solutions such as Virtual Private Networks (VPNs) or endpoint tools.
Mobile Network Operators (MNOs) and Mobile Virtual Network Operators (MVNOs) are at the forefront of this challenge. These service providers are tasked with the dual responsibility of ensuring optimal connectivity while safeguarding data privacy and user experience. As the market for basic connectivity services becomes increasingly commoditized, these operators are compelled to explore new avenues for revenue through value-added services. Among these, security services stand out as a promising opportunity.