The Deep Tech revolution -- Part 2: Meetups


Welcome back to Zama’s ‘Deep Tech Series’, exploring activities and initiatives that, while seemingly confined to heavily technology-driven companies and startups, can be applied to other organizations as well. Thanks to insights from experts in cryptography, privacy, blockchain and machine learning, the aim is to provide useful guidance on how to implement these activities in any kind of tech-driven company and beyond.
In the first installment, we looked at the advantages and added value of releasing white papers: a type of research- and data-based content that can share information about a product or technology while showcasing the company’s knowledge and expertise. In this follow-up, we'll look at the importance of engaging and growing your tech community through meetups.
Four key ingredients to unlock HyperProductivity through workplace IT


Just a few years ago, it was hard to directly link digital tools to their impact on productivity across a business. But as businesses have transformed and become more digitally mature, that’s all changed.
With metrics for tracking productivity now ubiquitous, business leaders want to take things to the next level. They are focused on HyperProductivity -- driving unprecedented levels of productivity across the organization. To date, businesses have tried a range of tactics to foster HyperProductivity, frequently involving discouraging workers from spending time on anything other than core tasks. As an extreme example, some businesses have even installed uncomfortable toilets designed to limit the time employees spend in the bathroom. Instead, they should focus on making it as easy as possible for employees to do their jobs, which is why optimizing workplace technology is a better path to explore.
Selecting the right storage for SQL Server high availability in the cloud


When it comes to the type of storage you might use for a cloud-based SQL Server deployment, all the major cloud providers offer a bewildering array of options. Azure has Standard HDD and Standard SSD, Premium SSD and Premium SSD v2. Oh, and then there’s Ultra Disk. And AWS? The options are no less eye-glazing: ST1 and Standard, GP2 and GP3, IO1 and IO2.
Even if you could easily differentiate the offerings, what would be your best choice if you plan to configure your infrastructure for high availability (HA) -- by which we mean an infrastructure designed to ensure that your SQL Server database will be available and operating no less than 99.99 percent of the time?
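To make that 99.99 percent target concrete, it helps to translate an availability percentage into an annual downtime budget. Below is a minimal sketch of that arithmetic (assuming a 365-day year; the function name is my own, not from the article):

```python
# Downtime budget implied by an availability target, assuming a 365-day year.
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_budget_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year allowed at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for target in (99.0, 99.9, 99.99):
    print(f"{target}% availability -> {downtime_budget_minutes(target):.1f} min/year")
```

At 99.99 percent ("four nines"), the budget works out to roughly 52.6 minutes of downtime per year, which is why storage choice and failover design matter so much for HA deployments.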
The promise of generative AI depends on precise regulation


AI’s path to maturity lacks footing without the stamp of regulation. While the recent developments of the White House executive order and the EU AI Act are a good start, there’s a lot of progress to be made in terms of AI research and rule-making. Because if we want to unlock the full range of AI use cases, we’re going to need precise regulation, tailored to the unique needs of each sector.
Generative AI sparks an opportunity to transform every industry, a prospect that has prompted a flurry of AI innovation across verticals that will undoubtedly send ripple effects throughout the economy. McKinsey estimates that across 63 different use cases, generative AI could contribute between $2.6 trillion and $4.4 trillion to the global economy annually. But sector- and use-case-specific regulatory frameworks are imperative if we want to harness this potential and ensure responsible, safe applications of generative AI.
Tech must look to build lean to go green


Technology’s role in tackling the growing climate emergency is recognized as a vital one. Yet, the sector's own detrimental contribution to the issue often goes overlooked.
The ever-evolving nature of tech, with constant changes in usage, equipment, and energy efficiency improvements, poses challenges in tracking its carbon footprint. However, projections suggest a concerning trend: by 2040, the ICT sector could contribute 14 percent of the world's carbon footprint, a significant jump from 1.5 percent in 2007.
What is fat finger error and how to prevent it


Whoever said "To err is human" was right (actually, it was the English poet Alexander Pope). Just as in our private lives, we all make mistakes in business too, no matter how diligent or professional we are. The trouble is, some human errors, however small, can have disastrous consequences. Like the fat-finger error that can cost an organization millions.
A fat finger error is a keyboard input mistake that results in the wrong information being transmitted. The term originated in financial trading markets and is now used more broadly in the security industry to describe data breaches that are caused by human error, particularly when the breach is attributed to mistyped information, like an email address.
What are the top cybersecurity trends to look out for in 2024?


As 2024 fast approaches, organizations are looking back on the past year to try to gain some insight into what the next 12 months could hold. This past year has been particularly interesting in the world of cybersecurity, with ransomware and data breaches dominating the headlines, the rise to prominence of AI strengthening cybercrime’s arsenal, and the shift of focus to cyber resilience causing businesses to question what comes next for the industry.
For security professionals across organizations of all sizes, we anticipate the following issues will be a key focus for the year ahead:
Kubernetes monitoring: 5 essential strategies for DevOps success


Monitoring Kubernetes clusters is a critical aspect of managing cloud-native applications. Kubernetes, a favored tool among giants like Spotify and Major League Baseball, empowers developers to create and operate at scale. However, the complexity of Kubernetes, with its multitude of nodes and containers, demands a robust monitoring strategy.
In this article, we share five key practices to enhance your Kubernetes monitoring approach. Let's dive in...
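As a flavor of what such practices look like in code, here is a minimal sketch of one common check: flagging crash-looping pods by restart count. The pod data below is mocked in the shape returned by `kubectl get pods -o json` (an assumption for illustration); in a real cluster you would feed in live API output instead:

```python
# Flag pods whose containers have restarted more than a threshold number of
# times -- a simple crash-loop signal. Data is mocked in the shape of
# `kubectl get pods -o json` output.
RESTART_THRESHOLD = 5

pods = {
    "items": [
        {"metadata": {"name": "web-7f9c"},
         "status": {"containerStatuses": [{"restartCount": 0}]}},
        {"metadata": {"name": "worker-4b2d"},
         "status": {"containerStatuses": [{"restartCount": 12}]}},
    ]
}

def flag_restarting_pods(pod_list: dict, threshold: int = RESTART_THRESHOLD) -> list[str]:
    """Return names of pods whose containers restarted more than `threshold` times."""
    flagged = []
    for pod in pod_list["items"]:
        restarts = sum(c.get("restartCount", 0)
                       for c in pod["status"].get("containerStatuses", []))
        if restarts > threshold:
            flagged.append(pod["metadata"]["name"])
    return flagged

print(flag_restarting_pods(pods))
```

In practice you would wire a check like this into an alerting pipeline (Prometheus, for example) rather than polling JSON by hand, but the logic is the same.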
Generative AI is forcing enterprises -- and policymakers -- to rewrite the rules of cybersecurity


Following a year full of excitement and uncertainty, and more opinions about the future of AI than anyone could count, AI providers, enterprises, and policymakers are now rallying around one thing: AI security.
The White House recently followed in the European Union’s footsteps to introduce a new set of standards for the secure development and deployment of AI models. But while regulators triangulate their policies and AI companies work to comply, the real responsibility to proceed safely will remain with enterprises.
How many times are you going to think about ransomware in 2024?


In 2023, we saw the popular trend of asking "how many times a week do you think about the Roman Empire?", and as an avid Roman Empire fan, my answer was a lot. In fact, the fall of the Roman Empire can be easily compared to ransomware breaches.
In 410 AD, the impenetrable walls of Rome were breached by the Visigoths, signaling an end to the once-mighty empire. The reason for the defeat of the Romans was complacency -- the walls and other defenses were in a state of disrepair, and Rome lacked a substantial military presence.
What to look out for when it comes to cybersecurity regulations in 2024


It’s been another busy year for cybersecurity regulations. We saw the White House release a new National Cybersecurity Strategy in March, and throughout the year the National Cyber Security Centre (NCSC) has launched several new initiatives to increase cyber resilience.
As mentioned by Joseph Carson, Chief Security Scientist & Advisory CISO at Delinea, the landscape of cybersecurity compliance is expected to "evolve significantly, driven by emerging technologies, evolving threat landscapes, and changing regulatory frameworks."
The CISO's next priority isn't technology, it's building a great employee experience


In security, we are very used to talking about features and functions in the tools we use. When it comes to measuring the positive impact of what we spend on cyber, in terms of both people and equipment costs, we tend to be equally abstract. For years, 'mean time to detection' and 'mean time to resolution' have probably been the two most widely used metrics for cybersecurity progress, and counting the number of security incidents handled is still probably how the CISO tracks their team’s contribution to the organization.
But no longer. Today we need to start thinking about measuring cyber’s impact in completely new ways -- or, to be more accurate, in concepts new to us in IT security but already very familiar to our colleagues in HR, with terms that seem very far from threat intelligence, such as wellbeing, inclusion and creating psychologically safe spaces.
The future of legal roles in an AI-driven world


The advent of artificial intelligence (AI) and large language models (LLMs) marks a significant turning point for the legal sector. Recent studies suggest a dramatic change is on the horizon, with up to 44 percent of tasks in law firms potentially being automated by AI. This impending transformation necessitates a re-evaluation of legal roles, requiring professionals to adapt and collaborate with AI while also preparing for the emergence of new positions.
AI promises to greatly enhance the efficiency and effectiveness of legal services, from streamlining the creation of legal documents, contracts, and agreements to automating repetitive tasks, ensuring accuracy and uniformity. AI's capability extends beyond mere data extraction; it can rapidly summarize complex documents like depositions and complaints and transform text into actionable insights. This will empower legal professionals to better understand and manage legal obligations, which will significantly enhance client services.
Facing a riskier world: Get ahead of cyberattacks, rather than responding after the fact


Today’s complicated threat landscape leaves security teams grappling with new challenges on a scale never seen. Threat actors are more organized and efficient, leveraging a vast ecosystem of tools and services that cater to experts and beginners alike. In early March, the Cybersecurity and Infrastructure Security Agency (CISA) released an advisory warning of the resurgence of Royal ransomware with new compromise and encryption tactics used to target specific industries, including critical infrastructure, healthcare and education.
Cyberattacks are only increasing and growing more destructive, targeting supply chains, third-party software, and operational technology (OT). Gartner predicts that by 2025, threat actors will weaponize OT environments successfully to cause human casualties. This is happening at a time of increased technology adoption led by accelerated digital transformation efforts, hybrid work and the Industrial Internet of Things (IoT) boom, leaving security teams to manage an evolving and growing attack surface and multiplying vulnerabilities.
Why AI panic in 2023 will yield to AI pragmatism in 2024


2023 rapidly became 'The Year of AI Panic' as governments and the press entered an AI frenzy.
Progress in generative AI, spearheaded by GPT-4’s release in March, offered users incredible tools with visible utility and practical benefit. Its impact could be felt across their personal and business lives. From that point, there has been a buzz around AI, with a snowball effect across the media fueled by sudden engagement from the most senior levels of government across the planet. 2023 has seen the AI train fly across our screens, and the pace of developments from a technical, policy and regulatory perspective has been almost impossible to keep up with. So too has the FUD -- the fear, uncertainty and doubt that accompanies disruption.
© 1998-2025 BetaNews, Inc. All Rights Reserved.