
API Security Testing Automation: A story of shifting left

BSides Athens · 25:56 · 122 views · Published 2022-06 · Watch on YouTube ↗
Category: Technical
Style: Talk
About this talk
Abstract: This talk is about an SDLC engagement we have and the challenges we faced shifting API security testing earlier in the SDLC process. We tried to keep this as short as we could. Bio: My name is Ignatios and I have worked as an application security engineer at TwelveSec for almost 7 months. You can find more about us at https://twelvesec.com/. Before that I was a software engineer for 4 years. I am a CSSLP and AWS-CCP credential holder.
Transcript [en]

Hello everyone, I'm really excited to be here with you, reporting from TwelveSec at this year's BSides event. My name is Ignatios, and today I want to talk to you about challenges we had in one of our SDLC engagements with one of our partners: having to shift penetration testing, which in our case was API security testing for an API-driven application, earlier in the SDLC process. We want to share our experience, the ideas we had, and how we came up with a solution. Since this is still ongoing for us, we'll see how it progresses over the next couple of months, and we also want to hear your feedback and get a conversation going on what you would do in your engagements in terms of processes, tools, and the like.

For anyone who doesn't know us: TwelveSec is a cyber security consultancy firm. We offer all kinds of consultancy services, you name it, we do it, from red team engagements and penetration tests to SDLC engagements and the like.

Let me give you a little information about who is talking to you. I was a software engineer for four years, and I have been an application security engineer for seven months, since I made this transition and joined TwelveSec. Since then I have been involved mainly in SDLC engagements as well as some infrastructure configuration reviews. I am also a CSSLP credential holder and an AWS Cloud Practitioner, now starting my journey down the cloud security certification path.

So that you have a clear map of what we're going to present: I will talk a little about the engagement and the activities we run, the requirements we have, some architectural characteristics of the application at hand, which were a source of input for the solution we had to build, and the factors we had to consider in order to choose the right processes and tools to do the job effectively.

This is an SDLC engagement, and we run quite a bunch of activities. We run risk assessment, not in the cadence of an agile sprint where we repeat it every time, but more like an inception report. It always comes in handy, because you need to go back to this report to refer to the general risk overview, the policies, and the classifications in terms of confidentiality, integrity, and availability of the data your system processes. It also relates to the prioritization of the cases you want to implement in a test suite, so that you get the most measurable outcome, the best return on investment, from the test cases in your security automation.

Among other activities, we of course do requirements analysis. This is a sprint-based activity in which we elicit and define security requirements in addition to the functional requirements, including business-specific security requirements related to the business cases at hand. These requirements will also shape how we define our test cases.
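One idea mentioned above is prioritizing the test cases you automate by the classification of the data they touch. As a toy illustration (the test-case names, classifications, and weights below are hypothetical, not taken from the actual engagement):

```javascript
// Hypothetical sketch: rank candidate test cases by the CIA classification
// of the data they exercise, so the highest-risk flows are automated first.
const weights = { high: 3, medium: 2, low: 1 };

function riskScore(testCase) {
  // confidentiality, integrity, availability ratings from the risk assessment
  const { c, i, a } = testCase.classification;
  return weights[c] + weights[i] + weights[a];
}

const candidates = [
  { name: "export user data", classification: { c: "high", i: "medium", a: "low" } },
  { name: "list public articles", classification: { c: "low", i: "low", a: "low" } },
];

// Highest score first: automate these test cases before the rest.
candidates.sort((x, y) => riskScore(y) - riskScore(x));
console.log(candidates.map((t) => t.name));
// → [ 'export user data', 'list public articles' ]
```

The point is only that the inception risk report gives you a defensible ordering, rather than automating endpoints in whatever order they appear.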

These feed, together with our general application security requirements framework, into the next activity. Then we have the architecture review. Here we exclusively use the OWASP ASVS, at Level 3 for critical applications, and we also do threat modeling using a tool called IriusRisk. It's automated not in the sense that you don't have to do anything, but in the sense that it's not a Word document of hundreds of pages where you do your threat model by hand, which can get messy and huge quite fast. It offers a good view of your risk status and generates controls based on the specifications you provide, and it is the main source for us regarding the business logic as an attack surface, which we want to cover in our security automation.

Then we have source code review. We do this manually, though not so often, with guidelines specific to the platform we are testing. We are also using static analysis with a specific, signature-based set of rules to identify vulnerabilities as soon as the developer writes the code, and we have software composition analysis for our third-party libraries.

Alongside these we also run a dynamic application security testing tool: we run ZAP to identify common vulnerabilities and web application misconfigurations. It has come in handy from time to time, though now that things are more mature and a lot of security controls have been implemented, there is not so much to gain from it. The main gap is that it does not cover the actual business logic we want to test in this application, which has many functionality areas with many different business flows consisting of multiple steps and integrations with external systems. This is really the attack surface we want to cover.

Moving on, we have penetration testing, and infrastructure audits are conducted against the cloud-based environments. Penetration testing is the verification activity with the highest level of security assurance over this attack surface we are trying to cover earlier in the life cycle; it offers the best assurance possible, but it is not conducted on a sprint basis. That is what we are going to tackle in our initiative: moving it a little earlier in the life cycle, so that we can leverage the assurance only penetration testing can provide.

We also have the requirements. From ASVS we have specific chapters that are applicable to the problem at hand and that will map onto the solution we chose: chapters like secure file upload, API architecture, access control, and others. We also set the goal for this security automation initiative to reach maturity level 3, which mandates having integrated security testing as part of your pipeline.

Moving on to the architecture: this is a single-page application front-end with a couple of REST APIs implemented in Java on the back-end. For our authentication architecture we use OpenID Connect with the standard flows of the OAuth specification. For our authorization architecture, at a high level we have role-based access control consisting of multiple roles and hundreds of permissions for different functionality areas, and we also have function-level access control, implementing authorization checks at the function level, to mitigate the violations that could occur if we only took the role into account.
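To make the two authorization layers concrete, here is a minimal sketch; the role names, permissions, and handler shape are all hypothetical, not taken from the actual application:

```javascript
// Hypothetical sketch of the two layers described above: a coarse
// role -> permissions map, plus a per-function authorization check.
const rolePermissions = {
  admin: ["article:read", "article:write", "user:manage"],
  editor: ["article:read", "article:write"],
  viewer: ["article:read"],
};

// Function-level check: every endpoint handler declares the permission it
// requires, instead of trusting the role name alone.
function authorize(user, requiredPermission) {
  const granted = rolePermissions[user.role] ?? [];
  return granted.includes(requiredPermission);
}

// An editor may write articles but must not manage users.
console.log(authorize({ role: "editor" }, "article:write")); // → true
console.log(authorize({ role: "editor" }, "user:manage"));   // → false
```

The second layer is exactly what the security test suite has to exercise: not "does the role exist", but "is this specific function reachable by this specific role".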

Then there is our accountability architecture, which is often overlooked, so we want to highlight it, because it is important from a compliance point of view: logging the right things in the right way, taking care of the privacy and sensitivity of the data you log, keeping an audit trail, and keeping people accountable for what they do, so that in case of a forensic investigation or similar you are able to provide the information that will be needed.

Going to the problem at hand: the first problem was sourcing, and by sourcing I mean how we are going to source our test cases, that is, what will be the authority for defining them. We had a couple of options.

The first was to crawl the application: have a tool like Burp intercept the traffic and give us all the valuable information such tools can provide, and from there, based on the endpoints identified, begin writing cases. This would be really fast, but we didn't like that we would have to treat the application's behavior and responses as the single source of truth for what was meant to be correct, because something might have been implemented wrong.

The second option was source code review. Since we have access to the source code, we could use regular expressions over it to identify all endpoints and read the code from there. This would take significant resources, but it would be a more white-box approach with really good coverage. However, it would miss the documentation, and it misses the opportunity to leverage all the previous activities in the life cycle, so that test cases are directly derived from them and decided collaboratively between security, the architecture team, and business analysts on the client side.

The third option, which also seemed the most natural, was the documentation, which we already had in a way, not always updated, not in a perfect state all the time, but the best possible place to identify the business flows we are most interested in. The attack surface most hit by the bad guys we want to keep out is the business logic of our application, and we really wanted that attack surface covered, so documentation seemed like the best option.

The solution was to create an access control matrix. We didn't have one at the time, but we liked the value it offered in terms of visibility, because some of this information was scattered across the user stories and use cases; we had authorization permissions noted in some user stories where applicable. Now we have a single Confluence page, though it could be in other formats as well. The most important thing is that we have a single source of truth for the authorization schema of the application as a whole. It can also be used as a basis to engage the developers and the architecture team, ideally in a review, to keep it on track, to have it in their definition of done or their workflow, and to leverage those actions in the test cases.

This enables cross-referencing: being able to track what the test cases are and where they come from, so that when you maintain this suite of API security tests you know that a given test relates to a given piece of functionality, and you can check what exactly lies beneath it.
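As a sketch of that cross-referencing idea, matrix rows can even be used to derive test cases mechanically; the row contents, role names, and the `deriveNegativeCases` helper below are invented for illustration, loosely modeled on the `c68` identifier that appears later in the demo:

```javascript
// Hypothetical access control matrix: one row per business action,
// stating which roles may perform it. Test cases reference rows by id.
const accessControlMatrix = [
  { id: "c68", action: "approve purchase order", allowedRoles: ["manager"] },
  { id: "c12", action: "view own invoices", allowedRoles: ["manager", "clerk"] },
];

// Derive negative test cases: every (action, role) pair NOT in the matrix
// should be rejected by the API, and each case keeps the row id as its source,
// so the test suite stays traceable back to the matrix.
function deriveNegativeCases(matrix, allRoles) {
  const cases = [];
  for (const row of matrix) {
    for (const role of allRoles) {
      if (!row.allowedRoles.includes(role)) {
        cases.push({ source: row.id, role, expect: "403 Forbidden" });
      }
    }
  }
  return cases;
}

console.log(deriveNegativeCases(accessControlMatrix, ["manager", "clerk"]));
// → [ { source: 'c68', role: 'clerk', expect: '403 Forbidden' } ]
```

The `source` field is the maintenance hook: when a test fails, it points straight back at the matrix row and the analysis behind it.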

Business flows are our primary focus here; we want to identify those flows, and the matrix helps us do it more effectively.

The second problem was how to structure our tests. One natural way to go is by endpoint path, which is pretty much what it sounds like: structure your tests following the URL hierarchy, sub-resource by sub-resource. That would achieve a good amount of coverage, I guess, but it lacks the security-focused side of things, because we wanted the structure to help the tester or developer who is writing a test think about the key areas we want covered security-wise. The other option was to go with a specification, and the solution was to use the OWASP API Security Top 10 project, which has a sufficient categorization, is community-based, and is open source. There is no real structured alternative anyway, but regardless, it does what we want, and it gives you the basic backbone of a mindset to write test cases for modern API-driven applications.

The next problem: what tool are we going to use? We really care about CI integration capabilities, about what APIs are supported, and about the learning curve for developers, in the ideal scenario where they also engage actively and make this part of their work; we wanted a tool that would already be familiar to them. The solution is Postman, which accomplishes exactly that, because developers already use it, not for security-focused testing, but to test their APIs in some way or another. It has dedicated documentation sections, which we really like and heavily leverage, both to refer back to our access control matrix, so that we get the tracking I talked about before, and to include the test data that a specific test case uses. It also has flow support, which covers what we discussed earlier: business logic and transaction-based flows, which we very much want to cover. Until now we covered those only in manual penetration tests at big milestones; now we want them covered as soon as possible. And Postman has a seamless integration with Jenkins, the continuous integration server we use, not in the form of a plugin: we can use it as a build step with the ready-to-use Newman Docker image.

The other problem is how we are going to track our data. For a test suite to run you need data, and test data scripts are responsible for creating all the data you need for a specific environment. We don't have those scripts at this point, we are in the process of creating them, but in the meantime we had a couple of options for tracking the data. One was a Confluence page, a nice choice because Confluence has built-in linking and history; another was something more sophisticated, to tackle the scaling issues that would come up in the future if this got big.
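As an illustration of that build step, this is roughly what running the collection through the Newman Docker image looks like; the file names, directory, and base URL here are placeholders, not the actual engagement's configuration:

```shell
# Hypothetical Jenkins build step: run the Postman collection with Newman
# inside the official Docker image, passing environment-specific values at
# runtime and exporting a JUnit report for the CI server to pick up.
docker run --rm -v "$PWD:/etc/newman" postman/newman \
    run api-security-tests.postman_collection.json \
    --environment verification-env.json \
    --env-var "baseUrl=https://api.example.internal" \
    --reporters cli,junit \
    --reporter-junit-export results/newman-report.xml
```

Passing values with `--env-var` at run time is what lets the same collection target different environments, as discussed later in the demo.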

But a full data management solution is not something we need right now. So we could also go with an old-time classic: an Excel sheet. That would be really fast to set up, and it is what we chose, for being straightforward and fast. Of course we know it won't scale, but we have the tracking we need, we can start small, and we can see how this whole thing progresses. What I really want to note here is that even when you are using an Excel sheet, it is important to keep documentation. Documentation, guys, is really important; it makes things so much more maintainable. Even in a simple sheet, like the one I'm going to show you in a little bit, you can have a documentation tab where you document your columns and specify some common conventions, and there you go: you have something that is easier to write and maintain, with a common format and no discrepancies or inconsistencies.

So thank you all for watching, here from TwelveSec; we wish you all the best. We hope you found something useful for your own engagements, and please give us feedback on what you think works best in terms of approach, processes, or tools. Moving on to the video. Peace.

Hello again, everyone, moving on to the video. Unfortunately we had some problems setting up the environment where we had configured Jenkins to run our test suite, so instead we are going to run the test suite on one of the dedicated verification environments we hold on our premises. Still, I will be able to show you a lot of the things we discussed, regarding Postman, its features, and how we leverage all the cross-referencing and the like.

Here in the collection, you can see we have it structured as per the OWASP API Security Top 10, which is security-focused and highlights the key areas we want to test as part of our API security testing. Next we have the documentation section, where we outline the basics of how we write and maintain our test cases, as well as some information on the references within the test cases themselves. Specifically, here we have the source, the associated action, which is the identifier of the action in the access control matrix, as I'm about to show you in a few seconds, and the identifier of the test data needed to run this specific test case. As you can see, the identifier here is c68; we can see the respective action, and the URL that links to the analysis for this specific action, outlining the requirements and all the business logic scenarios we want to test. That way the tester or developer writing or maintaining the case gets a very good idea of what the case is about.

Moving on, you can see the test data here. As we said, we simply track this test data in the sheet, with some documentation on how the columns are used and some common conventions for the sheet.

Moving on to Postman, I want to show you some features that may come in handy for various use cases. For example, Flows: a very good feature of Postman that can send multiple HTTP requests, but you can do pretty much whatever else you can imagine, from logical testing to XPath evaluations, encodings, decodings, and parsing of various outputs. Another very interesting feature is delays. Delays can be used, for example, in our case where we have an external system integration and must wait for some time interval before invoking an endpoint; we are going to use delays for these transaction-based complex flows.

Then we have environment variables. We use environment variables to avoid repeating the same data across test cases, as we do with our baseUrl environment variable here, but also, as I'm about to show you in the Jenkins configuration, we pass an environment variable as an argument to the Newman container at runtime. These variables can also be defined as secret; as you can see, you can select the type, and this way the value is masked to prevent information disclosure.

One other interesting feature of Postman is attachments, because you are going to have some multipart requests that include uploads, to test the upload endpoints. Postman, through its settings, lets you define a working directory; don't mind mine, which is an absolute path on my machine, but you can define a directory that is referenced the same way across multiple machines, so that you don't have path resolution issues. I will also show you what the outcome of an actual test run in Postman is going to look like.
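To give a flavor of what such a test script can look like, here is a minimal sketch in the style of a Postman test. The `pm` object below is a tiny stand-in so the snippet runs on its own under Node; inside Postman the real `pm` API is provided, and the case, endpoint, and status code here are hypothetical:

```javascript
// Tiny stand-in for the parts of Postman's `pm` API used below, so this
// sketch is runnable outside Postman. A real collection gets `pm` for free.
const pm = {
  environment: { get: (k) => ({ baseUrl: "https://api.example.internal" }[k]) },
  response: { code: 403, json: () => ({ error: "forbidden" }) },
  test(name, fn) {
    try { fn(); console.log("PASS:", name); }
    catch (e) { console.log("FAIL:", name, e.message); }
  },
};

// Shared data comes from environment variables instead of being hard-coded,
// so the same collection can run against any environment.
const baseUrl = pm.environment.get("baseUrl");
console.log("running against", baseUrl);

// A negative authorization case in the style discussed earlier: a role not
// allowed by access control matrix row c68 must be rejected by the API.
pm.test("c68: clerk cannot approve purchase orders", () => {
  if (pm.response.code !== 403) throw new Error("expected 403");
  if (pm.response.json().error !== "forbidden") throw new Error("unexpected body");
});
```

Naming the test after the matrix row keeps the run output traceable back to the analysis, which is the cross-referencing discipline described in the slides.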

I also want to highlight that you can name tests specifically, so that it is clearer what each test actually tries to verify; this is of course covered in more depth by the reference to the access control matrix itself.

Now moving on to Jenkins: as you can see, we have some basic configuration ready with the Newman container. We specify the directory we want inside the container, and we specify the collection, fetched from a GitHub repo in the previous step. This is the environment variable passing I told you about before, so that we can affect the runtime behavior of the collection, and there are also some report outputs that may come in handy for various use cases, depending on how you want to process the output of the test cases.

That's all. Thank you very much for watching, and please share any feedback or questions you might have. Thank you again.