How technical debt is harming API security [Q&A]


APIs allow the easy exchange of information between apps, microservices and containers. They've become an essential part of the way our digital infrastructure operates.

But the very ubiquity of APIs means developers are under pressure to produce them quickly, and that can lead to 'technical debt' as corners are cut. We spoke to Tom Hudson, security research tech lead at app vulnerability scanner Detectify, to find out more about why APIs are vulnerable in this way and how they can be secured.

BN: Why is technical debt top of mind for security professionals? How does technical debt impact API security?

TH: Technical debt appears anywhere a corner was cut to save time in order to get functionality out the door. Sometimes that corner cutting means not writing tests, only considering the 'happy path', or not removing old, unused functionality. When developers don't consider anything beyond the intended inputs and the most basic invalid inputs, vulnerabilities can be introduced. Old, unused functionality that is never removed tends not to be maintained or tested, and as new vulnerabilities are discovered and new techniques are developed, that old code may turn out to be vulnerable. By definition, APIs are consumed by other systems, and there's a strong tendency not to break things for those systems, meaning that old code sometimes sticks around for longer than it otherwise would, potentially causing security issues.
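The 'happy path' problem can be illustrated with a tiny hypothetical parser that was only ever exercised with well-formed input (the function and its inputs are invented for illustration, not taken from any real codebase):

```python
def parse_price(value: str) -> int:
    """Convert a price string like '12.34' to cents. Happy path only:
    assumes exactly one dot with dollars and two-digit cents around it."""
    dollars, cents = value.split(".")
    return int(dollars) * 100 + int(cents)

# The intended input works fine:
parse_price("12.34")  # 1234

# Inputs nobody considered break it: "12" raises ValueError because the
# split yields only one value, and "12.5" silently returns the wrong
# amount (1205) because single-digit cents were never thought about.
```

This is exactly the kind of code that passes its (happy-path) tests and ships, leaving the unconsidered inputs as latent debt.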

BN: How do attackers exploit abandoned code from previous versions of an API in order to gain access to information/assets? What types of attacks are most common once they gain access?

TH: As well as regular vulnerabilities like SQL injection and SSRF, a previous version of an API may have different business rules governing what can and cannot be used as input. Business requirements often change between API versions (and are sometimes the reason for a new version), sometimes resulting in more permissive filtering in older versions. For example, a previous version of the API may allow a larger set of characters in some user-provided data; a developer may later make an erroneous assumption about which characters can appear in that data based on the rules in a newer version of the API, potentially causing the data to be treated in an unsafe way.
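A minimal sketch of that version mismatch, assuming two invented validation rules for the same field in successive API versions:

```python
import re

# Hypothetical username rules: v1 is still live and more permissive,
# while v2 tightened the allowed character set.
V1_USERNAME = re.compile(r"^[\w.\-'/]+$")   # permissive: allows ' and /
V2_USERNAME = re.compile(r"^[a-z0-9_]+$")   # strict: lowercase alphanumerics only

def accepted_by(pattern: re.Pattern, value: str) -> bool:
    return pattern.match(value) is not None

# A developer reasoning only from the v2 rules might assume stored
# usernames can never contain quotes or slashes -- but data that
# entered the system via v1 still can.
legacy_name = "o'brien/admin"
accepted_by(V1_USERNAME, legacy_name)   # True: v1 lets it through
accepted_by(V2_USERNAME, legacy_name)   # False: v2 would have rejected it
```

Any code that treats stored data as if it conforms to the newest rules inherits the old version's attack surface.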

APIs are increasingly being used by other APIs too. A public-facing API may call internal APIs to get the data required to form a response. User-supplied data may be used in the paths for those internal API requests, and without proper sanitization of that data, path traversal can cause private data to be leaked. For example, a public API may accept a user ID as an input, like '1234', and use it as part of the request to an internal API, e.g. internalapi.example.net/users/1234. An attacker might pass '1234/../2345' as the user ID. It's possible that part of the code parses that input as an integer, resulting in '1234', and confirms it to be valid, while still using the original value as part of the path. The end result may be a call to internalapi.example.net/users/1234/../2345, which in turn becomes internalapi.example.net/users/2345 -- incorrectly exposing data for user 2345 to the attacker, user 1234. This kind of attack can easily be used to extract data for all customers. As well as sanitizing inputs, and only using the sanitized versions of those inputs, it would be preferable for the internal API to require the originating context of the request (i.e. which user made the public API call), so that it can also perform its own access control checks.
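The parse-then-ignore bug described above can be sketched in a few lines; the host name comes from the example, while the helper functions are hypothetical:

```python
import re

INTERNAL_API = "https://internalapi.example.net"  # host from the example above

def build_user_url_unsafe(user_id: str) -> str:
    # BUG: the check only parses the leading digits ("1234" out of
    # "1234/../2345") and declares the input valid, but then
    # interpolates the *raw* value into the path.
    if re.match(r"\d+", user_id) is None:
        raise ValueError("invalid user id")
    return f"{INTERNAL_API}/users/{user_id}"

def build_user_url_safe(user_id: str) -> str:
    # Parse once and use only the parsed value; int() raises
    # ValueError on "1234/../2345", so traversal never reaches the URL.
    return f"{INTERNAL_API}/users/{int(user_id)}"

build_user_url_unsafe("1234/../2345")
# -> https://internalapi.example.net/users/1234/../2345
#    which path normalization resolves to /users/2345
```

The fix is the rule stated above: sanitize the input once and use only the sanitized value everywhere downstream.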

BN: How can developers boost 'code hygiene' to account for these different versions and ensure they are not leaving any gaps?

TH: Static analysis and code linting tools can be very helpful when configured correctly, but dedicating time to tackling technical debt can also be effective -- provided that time isn't routinely taken up by developing 'urgent' new features instead.

From a more organizational point of view, raising security awareness among developers and providing them with the latest vulnerability information in a digestible format that makes it easy for them to act is crucial. Security teams need to proactively guide developers and engineers to make informed decisions -- not just monitor their code for flaws. It's all about collaboration: security is not one person's job, and everyone should feel empowered to own web security in their different roles and tasks.

BN: How can organizations assess the amount of technical debt they have across their assets?

TH: A good asset monitoring service can help you to spot the things you didn't know you had. Especially as companies get bigger and have a larger number of autonomous teams, it becomes more and more likely that shadow systems will start to appear (i.e. systems that your central IT or security function is unaware of). An asset monitoring system can use the same techniques used by attackers to discover unknown attack surfaces, unmaintained systems, and dangling DNS records.

BN: What course of action should companies take if they fall victim to an API security attack originating from technical debt? Is it too late for remediation at that point, or can organizations still patch the gap?

TH: It's never too late, but a large accumulation of technical debt can make it feel a bit like trying to untangle the mess of wires you keep stashed in that drawer just in case. Sometimes it's easier to pull it all out and put it back neatly rather than trying to untangle it in place. Root cause analysis is important in figuring out what your course of action should be. If it's established that the root cause of the vulnerability was a known piece of tech debt then the primary approach should be ensuring that time is allocated to remediation of tech debt, perhaps by ensuring at least one item of tech debt is included in every sprint (or whatever the equivalent is for the development methodology you're using). If the root cause was an unknown piece of tech debt, you need to consider allocating time to discovering and tracking tech debt, e.g. using Jira tickets or equivalent. Regardless of cause, buy-in from leadership is essential to show that technical debt can cause real problems and must be addressed.

BN: How can developer and security teams work together to make sure all facets of developments are secure, even after they are no longer being actively used?

TH: The best thing to do is to make sure that systems that are no longer used are decommissioned where possible. Sometimes this will mean the security team monitoring the state of things, and making requests of the development teams to decommission things. The other thing that's highly beneficial is to always work under the assumption that you will be breached, and plan accordingly. This is often referred to as 'defense in depth', ensuring you use encryption for sensitive data, internal APIs enforce proper authentication and access control, etc.
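As a sketch of that defense-in-depth idea, an internal handler might require the originating user's identity and perform its own access control check rather than trusting that the public-facing caller already did (all names and the in-memory store here are illustrative, not a real API):

```python
# Hypothetical internal API handler: even "trusted" internal callers must
# supply the originating user's identity, and the check happens here too.

def get_user_record(requested_id: int, acting_user_id: int, db: dict) -> dict:
    # Access control at the innermost layer (defense in depth): a bug or
    # traversal in the public API can no longer expose other users' data.
    if requested_id != acting_user_id:
        raise PermissionError("acting user may only read their own record")
    return db[requested_id]

db = {1234: {"name": "alice"}, 2345: {"name": "bob"}}

get_user_record(1234, acting_user_id=1234, db=db)  # allowed: own record
# get_user_record(2345, acting_user_id=1234, db=db) raises PermissionError,
# even if the outer API was tricked into making this request.
```

Propagating the originating context to internal services is what turns a single perimeter check into layered protection.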


