NYSE outage could have been avoided with code quality software

The last three years have provided a catalog of IT horror stories: RBS, which somehow lost over 600,000 payments; the NASDAQ glitch, which cost $62 million in fines alone; and now the New York Stock Exchange (NYSE), where trading was halted last week for almost four excruciating hours.

The public is wondering how these software glitches still happen despite the millions spent to upgrade corporate IT systems. Those wholesale technical upgrades have not prevented billions being lost across the global economy to what are generically described as "technical faults". So what’s the real problem?

"The fact is the exchanges, as well as most other financial services institutions, experience glitches routinely and some even daily. Most of these software glitches are contained quickly by the infrastructure operations teams and do not affect revenue-generating activity. But, once in a while, like when they corrupt data, software glitches become visible to the public eye as millions are lost per hour because businesses cannot find or prevent the root causes of such glitches", said Lev Lesokhin, head of Strategy & Analytics, CAST.

Sounds simple. However, the world is looking elsewhere for the answer. Thanks to international espionage and high-profile breaches, the media and others are focused on cyber security, investing resources to protect software from potential external attacks. We suggest, however, that the more immediate threat to IT infrastructure is bad code being bolted onto legacy systems.

The expansion of business online, together with new software development methods, means millions of lines of new code are written every day. The question is: who analyzes the quality of all that code? Developers and programmers working under agile methods struggle to ensure that the structural quality of legacy systems can still cope with the modern pressures of the digital marketplace.

The key to maintaining software quality may not garner the headlines that data breaches do, but it’s no less important. Defensive code and a set of checkpoints must be built into a software quality framework from the very start. Once an application goes live in production, the complexity of installing monitoring tools means quality issues can easily be missed until it’s too late, which may explain the technical faults and glitches suffered by NASDAQ, the NYSE, United Airlines, and many more. Explain, but not excuse.
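
To make "defensive code and checkpoints" concrete, here is a minimal sketch in Java. The OrderValidator and Order names are hypothetical illustrations, not any exchange’s actual code; the point is simply that data gets validated at the system boundary and corrupt values fail fast instead of propagating downstream.

    // Illustrative only: a hypothetical order handler showing the kind of
    // defensive checkpoints that reject corrupt data at the system boundary.
    public final class OrderValidator {

        /** Thrown when an incoming order fails a validation checkpoint. */
        public static final class InvalidOrderException extends RuntimeException {
            public InvalidOrderException(String message) { super(message); }
        }

        /** A minimal, hypothetical order record (requires Java 16+). */
        public record Order(String symbol, long quantity, double price) { }

        /**
         * Validates an order before it enters the system. Failing fast here
         * keeps corrupt data out of matching, settlement and reporting.
         */
        public static Order validate(Order order) {
            if (order == null) {
                throw new InvalidOrderException("order must not be null");
            }
            if (order.symbol() == null || !order.symbol().matches("[A-Z]{1,5}")) {
                throw new InvalidOrderException("malformed symbol: " + order.symbol());
            }
            if (order.quantity() <= 0) {
                throw new InvalidOrderException("non-positive quantity: " + order.quantity());
            }
            // The negated comparison also rejects NaN, since NaN compares false.
            if (!(order.price() > 0.0) || Double.isInfinite(order.price())) {
                throw new InvalidOrderException("invalid price: " + order.price());
            }
            return order;
        }

        public static void main(String[] args) {
            // A well-formed order passes the checkpoint...
            System.out.println("accepted: " + validate(new Order("IBM", 100, 143.25)));

            // ...while a corrupt one fails fast instead of poisoning downstream state.
            try {
                validate(new Order("IBM", -5, 143.25));
            } catch (InvalidOrderException e) {
                System.out.println("rejected: " + e.getMessage());
            }
        }
    }

Validation at the boundary is the cheapest place to catch bad data; once corrupt values have spread through downstream systems, containment becomes far harder.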

This means investment in upgrading software and in preventing cyber crime could be wasted if the underlying system doesn’t pass the quality and flexibility test. A newly upgraded system may work for several months, perhaps a year, but eventually a glitch injected during development will surface, halting trades and stopping the system in its tracks.

Focusing on structural quality analysis as part of the quality process relieves IT teams of the pressure of spending all day keeping the lights on, freeing them to concentrate on revenue-generating IT projects. If a financial services organization’s code is not robust enough to handle the digital demands of the 21st century, a system-wide snafu is inevitable... one that costs the company not only immediate revenue, but also its long-term reputation in the marketplace.
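
As a sketch of what structural quality analysis can look like when wired into the delivery process, the hypothetical gate below fails a build when critical findings exceed an agreed threshold. The QualityGate class, the one-finding-per-line report format, and the zero-tolerance threshold are all assumptions for illustration, not CAST’s actual tooling; any real analyzer’s report could be consumed the same way.

    // Illustrative only: a hypothetical build-time quality gate. It reads a
    // report of structural-quality findings (format assumed here, one finding
    // per line: "SEVERITY,rule,file:line") and fails the build when critical
    // violations exceed an agreed threshold.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    public final class QualityGate {
        // Threshold agreed with the team; here, zero critical violations allowed.
        private static final long MAX_CRITICAL_VIOLATIONS = 0;

        public static void main(String[] args) throws IOException {
            if (args.length != 1) {
                System.err.println("usage: QualityGate <findings-report.csv>");
                System.exit(2);
            }

            List<String> findings = Files.readAllLines(Path.of(args[0]));
            long critical = findings.stream()
                    .filter(line -> line.startsWith("CRITICAL,"))
                    .count();

            if (critical > MAX_CRITICAL_VIOLATIONS) {
                System.err.printf("Quality gate failed: %d critical violation(s)%n", critical);
                System.exit(1); // a non-zero exit fails the CI pipeline
            }
            System.out.println("Quality gate passed.");
        }
    }

Run in continuous integration before deployment (for example, java QualityGate findings.csv), a gate like this moves detection from production, where outages are measured in lost revenue, back into development, where a failed build costs minutes.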

It is time to go back to basics, or businesses will face Groundhog Day again. To avoid being named and shamed in the media in the years to come, CIOs and CTOs need to invest in code quality software; these incidents are understandable, but they are not acceptable.

Dr. Bill Curtis is chief scientist at CAST.

Published under license from ITProPortal.com, a Net Communities Ltd Publication. All rights reserved.
