
Enter The Ninja

BSides Warsaw · 2018 · 47:56 · 802 views · Published 2018-10
Transcript [en]

Good evening, everyone. Or something like that; it's almost 4 p.m., so it's almost evening. I have the pleasure of closing this beautiful day with a somewhat strange lecture about what irritates me in the current state of affairs, plus I'll tell you about the assumptions behind the Binary Ninja product and how it works, all in the context of malware analysis, or anything else that requires background analysis.

Two words about me. I've been doing this for a long time. I used to do things in Poland; now I do things for Russians, because they pay better. Besides that, I do things for myself and for others. Am I available for hire? I have some projects, some websites, some things around; you can read about them. These slides will probably be available in 15 minutes, because I had to upload them just to be able to open them now.

You tell me every year that I ask stupid questions, so tradition must be satisfied: let's start with questions, to establish some common ground. Has anyone used a full disassembler? IDA, Binary Ninja, Hopper... I'd mention r2, but I'm a bit sick of it at the moment. A few people have, quite cool. Who has used a disassembler as a library? I'm talking about Capstone, libopcodes, that kind of thing... Oh my, today's a good crowd; I haven't had that many hands in a while. Who has written their own disassembler? Worse here, but one person. There should be a second one, but it went wrong. Who has written their own disassembling engine? That's a bit hard to answer before I explain what I mean by a disassembling engine, which I'll do in a moment. So, not bad, not tragic; we'll talk it through and it'll be fine.

I've received a lot of feedback that I talk very quickly and don't introduce the theory I'm talking about, so let's start with a slow introduction to the topic. I'll quote Wikipedia, because it is a very good source of information. According to Wikipedia, a disassembler is a computer program that translates machine language or bytecode into assembly language, its mnemonic form. And in fact that's right; it's a very good definition, everything is fine.

Now, "disassembling engine" is my own definition. It's what I understand as a program or library responsible for transforming code, whether bytecode or machine code, into logical sub-units such as functions and basic blocks. Something that takes a binary (it doesn't even have to be a whole binary, it can be a raw block of code) and creates a logical structure from it that we can operate on. In effect it builds a control flow graph, which I'll get to in a moment. This is a very important definition, and it will keep coming back; it's going to underpin most of my presentations in the near future, because this has been my interest for a long time.

And now back to Wikipedia; it's easy to distinguish my text from Wikipedia's. I'll read: a basic block is a sequence of instructions with exactly one entry and exactly one exit; the entry is at the beginning and the exit at the end. Logical. The instructions of a basic block execute in the order they are written, without diverting execution: the first must execute, then the second, the third, and so on until the end. Blah blah blah, I don't want to read the rest; it's quite obvious, right? A basic block is a sequence of instructions that always executes as a unit, and that's it. What the problem with this definition is, I'll show you a little later, along with how the tools we use every day as people who analyze binary code understand these definitions.

The control flow graph. Wikipedia again: a program's control flow graph is a graph that tells us how the program will behave, so at some point execution might branch because there are conditional instructions. Wikipedia's definition wasn't so bad; you could tell it was written by a university professor. I haven't written or analyzed anything here for a long time, but it will be fixed on the slides later.

Having some idea of what we'll be talking about and what the definitions look like, let me present a list of popular disassemblers. As has been said, perhaps clearly, perhaps not, there are two kinds. There are disassemblers, simple programs that translate machine code or bytecode into a form readable by people. And there are more advanced programs, disassembling engines, which analyze that code in order to generate a bit more than a pure dead listing. Some products have their own disassemblers; some use open, or less or more accessible, libraries.

The most popular ones available on the market: IDA Pro, of course, which has its own disassembly engine, quite apart from being a fully developed program for other things. There is libopcodes from GNU; all the tools built by a certain handsome man, GDB for example, are based on it. The handsome man is Stallman, if you didn't know. It's an old library that supports many architectures, mainly because Linux runs on many architectures, so they had to have something that supports them. The newest product, which is already six years old, is Capstone Engine, which revolutionized access to disassembly. I say this with both humor and respect. On the one hand, the guy did a really good job creating such a tool and sharing it with the world. On the other hand, the code is really shitty, and the library is of very poor quality; anyone who has used it for a long time knows it's not always a good thing. The guy who released it was at CONFidence a long time ago and showed some of the products he'd made. It was sad for me, because at some point the power went out while he spoke, and he was very surprised by his life. But he's a nice person and does what he thinks is right. The problem with Capstone is that it's written in C and it's developed and managed in a strange way, but it was a revolution for the availability of this kind of tooling, because it's packaged in a sensible way; it's not libopcodes with its slightly tragic interface. I'm not saying Capstone is good, but it's not tragic. We can use it in a civilized way, it gives fairly good results, and it has a lot of bindings for different languages: Python, Ruby, Rust, Haskell, ML... pick a language, there's probably a binding. It's considered a wonder of the world and the basis of many projects related to program analysis.

Those are the three most popular and most accessible disassemblers that support many architectures. If we want to build a project that supports not only x86 but also ARM, PPC, MIPS, whatever we want, we can use one of these; it will be easier for us. There are also architecture-specific disassemblers. I mainly work with x86 and its 64-bit version, so I'll just mention those; there are presumably specialized disassemblers for ARM and other architectures too, because I know there are tools that only support ARM. The leading product in this field is Intel XED, a disassembler developed by Intel, the creator of x86. So you'd think they should have it right in this case; let's say they do. And there is Zydis, which in my opinion is the best x86 disassembler. (Someone is clearly not agreeing with me... or maybe it's the microphone. It's hard. Let me pass it to someone, maybe that one will be better. There: Zydis, the best disassembler? Thank you.) Yes, it is the best and most sensible one, and it disassembles the most correctly. Because what are IDA, libopcodes, Capstone? If you look at the issues on Capstone, the first two pages are full of "this instruction doesn't disassemble correctly", and the answer is basically "don't give a fuck, please send a pull request, then I'll fix it." No, okay, why not. So Zydis is what I consider the most correct and most accessible choice for x86 and its relatives.

Then there are things that are more historical; Adam will know, because it's from his era. udis86, diStorm64, BeaEngine: things created in the early 2000s, around that time. They were used in old, hack-like projects, on the crack scene and so on. Good things always have their drawbacks, of course, because people wrote different code back then, with different goals. There are a lot of old disassemblers from the demo scene and the crack scene, but I won't talk about them, because that's the dark age of our world.

Still, those old disassemblers were really good, because they were written for specific purposes, usually viruses. I don't write viruses, of course, because that's a thing, but it was once quite a pleasant thing and you could learn a lot from it. Those disassemblers were meant to be small and very accurate, so a lot of the information contained in them is very useful. For example, one of the few, I don't remember the name exactly (if Paweł Srokosz were here he'd probably remember, but I don't think he does), in general the author put his heart into it. Some time ago my subordinates had a task to write a disassembler that generated information about which part of the byte encoding corresponds to which part of the instruction, so that's one of the basic tools: a disassembler of the kind created in the virus-writing days. This list is long; you can go to Wikipedia and read it, I won't recite all of it, it doesn't make sense. So, let's move on.

A few disassembling engines. As I said, IDA Pro is a full-fledged tool. There's Radare2, which has its own analysis module, which I try to use. There's Binary Ninja, which I'll talk about later; despite appearances, this talk's title doesn't really have anything in common with Binary Ninja, despite the name, but I had to come up with something, and I still think it's a cool tool. There are a few projects like Relyze. Relyze was created, I think, two years ago as something that was supposed to change the world, an alternative to IDA; as far as I knew, version 1.0.2 came out and the project ended there, but now version 1.3 is out, so you can buy a new one soon. There's Hopper, which is a really great thing if someone is into Objective-C. There are projects that are more or less scientific. There's SMDA, which came out quite recently, published by a friend of mine. There's Nucleus, which is really great work on how to create such engines; the code itself is readable and easy to understand, and it's worth reading to learn how this works. I recommend it, even though it's based on Capstone.

And the conclusion to this whole story is that everything sucks and I had to write my own engine. I'll release it someday. I don't know when, but if I do, I'll release it. It's bad. Generally it's all bad, apart from IDA. But IDA is expensive, and it's almost useless beyond the fact that you can open it, run the analysis, close it, and it's cool. My requirements are a bit more advanced. I need something that works on my server: a sample comes in, it goes "boom, boom, boom", runs an analysis on the sample and sends me the results. I don't know if any of you have ever bought IDA. One person, two, three. It's not a cheap product. The basic license costs about 3,500 euros for one architecture. On top of that you have to buy the decompiler separately, which costs another 3,500. And so on, every year. That's more or less a lot of money. So IDA is not something that scales for production if we want to process samples in a small lab like mine. So... well, you have to think about alternatives.

I have some requirements for such a tool to be usable. First, let's start with disassembly: the decoding has to be correct. If someone tells me that a given instruction decodes badly, well, fuck it, how am I supposed to use it if it's not correct? It has to be extensible, because there are extensions to the instruction set used, for example, by virtual machines. (Mako, a request: limit the number of words starting with "ch"? With "ch"? Yes, with "ch". You know... Fuck! Fuck, that's better, good. The "ch" words are the Polish profanity. Okay, I'll try to be more polite in my presentations. I've been trying for 20 years, but it never worked out; maybe one day it will.) Anyway, some tools, especially virtual machines, use instructions that are not part of the original ISA as their own backdoors. This includes Virtual PC and VMware: they have instructions that, when executed, make the hypervisor say "okay, this is a hypercall, let's do something else." If we can't disassemble those, it's not a problem when analyzing ordinary software. But if we're analyzing malware that uses them to detect whether it's running under a hypervisor and we can't disassemble them, we're going to have a problem. Okay, it's no tragedy that a disassembler doesn't support them out of the box, but then let me add them myself, because I need them. So extensibility is very important here.

The third thing is the semantics of instructions. I'd really like to know what an instruction does: a given instruction causes an address to change, how will it change, and what will be there at the end? It's fascinating that not many disassemblers give you this information. A bonus would be an exact breakdown of the bits of each instruction: which part of the instruction a given bit affects. What's that useful for? Creating signatures automatically. Because if we have a sequence of instructions A, B, C, D with some operands, and we assume some things can change, because some operand can change, because the compiler will come up with something else, but the instructions themselves will stay the same, then we'd like to be able to say: okay, mask out these bits, because they don't interest me, but these must match. At that point we need to know exactly which bits are responsible for what in order to have a fully accurate signature. Of course it can be approximated, and it usually is done that way, because few disassemblers expose this bit-level information.

What do we need from the engine? Scriptability, because we want to use it headless; we don't want to look at it or click through it, we want it to just run. A good API, of course, because who wants to struggle with a weak API? The same correctness, and extensibility, because maybe the authors didn't anticipate something, maybe they're not interested in adding it or don't have time for it, but we know how to add it and want to do it ourselves, so we need that possibility. And the lack of a GUI, which may be controversial, but all the tools that are a decent framework built around a GUI are interactive. Great, they have to be interactive for a person to work with them, but they're not useful for automation. Tough.

So, as I said, there's no good solution. But there is a good solution, and I'll talk about it in a moment. It's called Binary Ninja, and I think it's a really good product. I don't often say that something is a good product, so it means something. It's a product that evolved from the internal tooling of a certain CTF team. I've been involved with CTFs for quite a long time, I'm very connected to it, so I can appreciate that a given team spent a lot of time and, it's hard to say this in Polish, created a tool that is usable for them. We know how it goes: "we need tools, let's do something; in two weeks I have 15 minutes free, so I'll start writing code; okay, we're going to a CTF in three weeks, we need this for the CTF; okay, we'll be on the plane for 8 hours, we'll hack on it; oh, I got wrecked on the plane; okay, we have 24 hours before the CTF, so we sit and write." And you know how it is to write things that way. So I appreciate where it came from, plus I know what's required of such a tool. Because when you play CTFs you have certain requirements for tooling that are a bit different from the tools used in the real world. Playing CTFs is not the same as analyzing malware, of course. But the level of scripting is still very high, because you want to do a lot of things as fast as possible, so you want to script it as fast as possible and move on.

This slide is taken wholesale from a presentation by a girl I saw at Recon, but only because it really sums up this product very well. It was released recently, it has a really great API, which I'll talk about in a moment. It also supports binary modification: we can patch in a very easy way. Anyone who has patched anything in IDA knows it's a nightmare; it's not an easy thing, you patch byte by byte, and sometimes not even that. There's a very active community around this product: people are porting things they'd come up with for other engines to Binary Ninja, or developing their own. The main source of this is a group in the US that grew out of a university CTF team, probably RPISEC, but I'm not sure. There's a Slack with a lot of people exchanging information; it really develops rapidly when it comes to the community.

What is really amazing, and was innovative when it was published, is the way they approach disassembly. Most disassemblers are like: okay, we load the binary, we have a nice disassembly, and now we press the magic button and we get decompilation, like Snowman, Hex-Rays, r2dec, RetDec, whatever. Here we have a more iterative approach. There are certain analyses that are necessary to generate code resembling C, a pseudocode, from the assembly. So they iterate successive analyses over successive intermediate languages, while simultaneously making each intermediate language available to the user.

So we start by loading the binary. And here's the first gripe with Binary Ninja: to this day, as far as I remember (I haven't checked for a long time, but I'm pretty sure nothing has changed, because there was no news about it), you can't load a memory dump into Binary Ninja. It must be a real binary; it can't be arbitrary code. You can view it as raw bytes, but you can't disassemble code that is just plain bytes. And that's a big problem if you analyze malware and constantly dump it from memory to see what it looks like. But as I said, there are no ideal solutions.

We take this code and lift it to some intermediate form, because we want to support many architectures; we don't want to write these analyses for each architecture separately, we don't want to write separate heuristics. What I find very beautiful in this is that there are almost no heuristics. If anyone has dealt with IDA from the inside, or read blogs about it: IDA is built on a pile of heuristics. Which makes a lot of sense. It's not software that was built by a reverse engineer; it was built by a programmer, and a long time ago, when knowledge about program analysis was maybe not a taboo topic, but quite closed and accessible mainly to academic institutions. None of the people working there had anything to do with that world. So they built a lot of heuristics, a lot of their own ideas, which turned out to match one-to-one what we now know to be correct analysis, which is really awesome and shocking. If anyone watches Ilfak's old presentations from a few years back, it really knocks you off your feet: those guys, 20 years ago, more than 20 now, came up with what I now consider the standard analysis of software. Mind-blowing.

Ad rem. We don't want to do all this per architecture, one by one; for everything we support, we'd like to do it at once. So there's an intermediate language into which the disassembled code is raised, and it's called lifted IL, lifted intermediate language. We don't have access to it, but we don't need it either, because it's not something that interests us in any way. What we do have to define is the semantics of each instruction in this language: if we want to add support for, say, RISC-V, because why not, this new architecture is popular, we need to know how the lifted IL works (it's well described in the documentation), and then describe how each instruction behaves. With that, the engine can run a number of analyses, transforming it into the next intermediate form, called low level IL. Some SSA is done on it at the beginning, then another analysis, then a transformation to the next level, and so on, until it reaches the point where it says "no further", that's enough.

Here's some example code; I don't know if you can see anything here. It's very bright, I think, right? Well, never mind. Can I turn on the light or something? Because I can't see anything here. Technology. Okay, let's say you can see something. So there's some code from some binary, and something happens; it doesn't really matter, it's just an example of how it looks. As I said, lifted IL looks like this. I'm going to skip through the slides, because I don't have speaker notes; macOS doesn't support the version with notes, in my case. I'll talk about how low level IL looks in a moment. It is a very simple language: there are not many instructions, and it is a tree of expressions. It is potentially infinite, but we don't care, because it never will be. The same goes for its related representations.
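To give a rough feel for what a tree-of-expressions IL looks like, here's a toy model. These classes and names are invented purely for illustration; they are not Binary Ninja's actual API, just a sketch of the idea that each lifted instruction becomes a small expression tree.

```python
# Toy model of a tree-of-expressions IL, in the spirit of a low level IL.
# Invented classes for illustration only -- not the real binaryninja API.
from dataclasses import dataclass

@dataclass
class Reg:        # a register leaf, e.g. eax
    name: str

@dataclass
class Const:      # a constant leaf, e.g. 8
    value: int

@dataclass
class Add:        # an addition node with two subtrees
    left: object
    right: object

@dataclass
class SetReg:     # "write expression into register" root node
    dest: Reg
    src: object

def il_str(node):
    """Render an IL expression tree as text, recursively."""
    if isinstance(node, Reg):
        return node.name
    if isinstance(node, Const):
        return hex(node.value)
    if isinstance(node, Add):
        return f"({il_str(node.left)} + {il_str(node.right)})"
    if isinstance(node, SetReg):
        return f"{node.dest.name} = {il_str(node.src)}"
    raise TypeError(node)

# "lea eax, [ebx+8]" lifted into the toy IL:
expr = SetReg(Reg("eax"), Add(Reg("ebx"), Const(8)))
print(il_str(expr))
```

The point of the tree shape is that analyses (constant folding, dead value removal, and so on) become simple recursive rewrites over these nodes.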

From this lifted IL, certain analyses are applied. First of all, the flags are written out explicitly. I'd point out that this is a very specific language, because many of its instructions are not single operations: they do a bit more and depend on certain intermediate states. So the implicit flag effects are made explicit in order to apply further analyses; that's done during this step. It also tries to identify every memory access, so that we know we have a reference to a byte, to a word, to a qword, to whatever, and it removes the fucking NOP instructions. Not just literal NOPs, but semantic NOPs: for example, xchg eax, eax is also removed, because nothing changes.

Medium level IL is a new addition, from last year I think, I don't remember exactly. It almost resembles C: there are explicit types for local variables and global variables, plus a few very simple, maybe not simple but obvious, operations like constant folding, dead variable removal, etc., etc. This is the list of operations that is usually applied when compiling programs. Everything applied here, apart from type recovery, follows almost directly from compiler theory; it's what the compiler does to make our program smaller, faster, more efficient. This is how it looks in practice.

Each of these forms also has an SSA form: static single assignment form, where each variable is assigned only once. This is very useful, because you can easily track which value was written by what. There are a few other things I won't talk about, because that's a matter for a second lecture, but it's a very useful thing for certain analyses. I'll skip the details, because they're simply difficult.

How do you use all this? One of the most important things is the API. When I was teaching at the University of Wroclaw, I set a task of writing code that finds format strings in binaries: potential uses of dangerous format strings. I remember that my code in IDAPython took about 100-200 lines, mainly because I had to write a generic way of obtaining call arguments. In Binary Ninja, because the analysis is done explicitly and we have access to its results, we can simply fetch the parameter of a given call at a given address. This of course requires certain information about calling conventions, but the platform has that. Here's an example; it's not too complicated. Ideally it would be... well, it doesn't work quite the way I'd like. As I said, the world is not cool, it's really weak; it turns out nothing works as it should. This is a very simple check: it only tests whether the parameter is constant, and where it lives in memory. Is it in a segment that is writable or not? If it's in a non-writable segment, we're safe, nothing will happen, everything is fine. If it's not in a non-writable segment, we have a problem.

But maybe we'd like to extend it. I wanted to extend it, as I was writing this today on the train, to also detect the case where the address of the argument that interests us, the format string, is on the stack. That's the first thing: it's writable then, and maybe it can be modified. But if we have a situation where it's concatenated from some std::string or something like that (I had code to show you, but my laptop isn't working properly today, so I won't), we could try to establish that there's nothing else on that part of the stack that would concern us: a constant value up to a given zero terminator that says we're not going any further, so we're safe. Ideally we would iterate over every byte of the stack at a given position, saying: okay, if it always stays like this, nothing will happen. It turns out that doesn't quite work, so we can only say there may be a problem, but not for certain. That's the eternal problem with static analysis: you can never say it exactly, because it's generally an undecidable problem. So we can only approximate the results.

Another example, unless it didn't come out: some malware, where we can also get arguments without any problem. Here we have a little more code: we look for certain functions with the help of heuristics, some patterns, and do some stuff with them. A bit more of it here. Binary Ninja: there's a Slack, there's documentation, a very good blog. There's also a company that's a sort of unofficial partner of this whole thing and does a lot of work related to it. And some random links from the Internet. This one I definitely recommend. This one isn't really great: the presentation was cool, but the good part was at the beginning, as you can see. And other things.

Now we'll go a bit further. As some of you know, I gave a presentation a year ago; I heard someone had a bad opinion of it, but whatever, it was good. I don't know, it doesn't matter. What matters is that it was something that interested me, so I kept going and did an experiment. I implemented the code I was talking about in IDA. I started from the fact that IDA has very weak APIs, so I wrote some abstractions over them. The problem is: I have it in IDA, it works, everything is fine, but now I'd like to use it at some scale, have it in the malware-processing pipeline, which processes things and then says "okay, this is the same binary, this is not the same binary". And as I said, IDA isn't suitable for that, because it's expensive and difficult to deploy on a server.

So, some simple code; it does very simple things, really. What we need to know here is: the correct form of the graph, so for example how many entry and exit edges a basic block has; some topological order (I explained it a year ago, I won't repeat it); and some properties of the basic block itself, so that I can say this basic block really is that basic block: a hash of the mnemonics, and some simple information about the basic block, like the number of data references, the number of calls, and a couple more counts of that kind. Really very basic things that should be obtainable from any tool. So, as I said, I wrote some code to glue all of this together, because I wouldn't want to write it over and over for each engine. The code is called Reglu, because it's a glue for things. I will release it sometime.
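The per-basic-block features just described can be sketched roughly like this. The feature set and the multiplicative mnemonic hash are a guess at the idea as described in the talk, not the actual Reglu code; the dictionary shape for a block is invented for the example.

```python
# Sketch of per-basic-block matching features: edge counts, simple
# counts, and a multiplicative hash of mnemonics. A guess at the idea
# from the talk, not the real Reglu implementation.
from hashlib import sha256

def mnemonic_hash(mnemonics):
    """Map each mnemonic to a 64-bit odd value and multiply them
    together mod 2**64. Multiplication is commutative, so the hash
    ignores instruction order within the block."""
    h = 1
    for m in mnemonics:
        v = int.from_bytes(sha256(m.encode()).digest()[:8], "little") | 1
        h = (h * v) % (2 ** 64)
    return h

def block_features(block):
    """block: dict with 'mnemonics', 'in_edges', 'out_edges',
    'data_refs', 'calls' (a hypothetical shape)."""
    return (
        block["in_edges"],
        block["out_edges"],
        block["data_refs"],
        block["calls"],
        mnemonic_hash(block["mnemonics"]),
    )

b1 = {"mnemonics": ["mov", "add", "call"], "in_edges": 1,
      "out_edges": 1, "data_refs": 0, "calls": 1}
b2 = {"mnemonics": ["add", "call", "mov"], "in_edges": 1,
      "out_edges": 1, "data_refs": 0, "calls": 1}
print(block_features(b1) == block_features(b2))
```

This also foreshadows the failure mode discussed next: if two engines emit different mnemonics for the same bytes, the hashes disagree and the whole comparison falls apart.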

It has some kind of interface, more or less. It abstracts access to... I was about to say "abstracts", I know... it abstracts access to the basic elements our binary consists of. And as I said, it will be released sometime. Someday. You know.

Oh, this didn't load as it should; there was supposed to be a nice tweet here. I'm trying to refresh it; I can't. Okay. So we started with this. I took some old code I had on disk. I was interested in Necurs back then, for reasons, and we were trying to determine how the versions of Necurs developed over time. I had, I don't know, a few thousand samples, I think. I ran the code. It works, it works, it works... oh, it broke. And it broke down here. Let's see why: it broke because there's code like this. When I first ran it, everything worked fine; that was my ground truth. So I said, let's try to do it in production, and I put it into Radare2. I heard it's a cool thing, apparently; it works, you can use it. I put it in, I tested it: it doesn't work. So I tested and tested; this piece of code doesn't disassemble at all. What is this? This is this. Okay, so what? Let's get someone's attention: Twitter. "It doesn't work." "Really? Tough. It doesn't work because it won't work." So it's not the fault of the engine; it's the fault of the underlying disassembler, and it just won't work. But it does work: Radare2 really is a great thing with a highly active community, and a guy wrote an extension for it at the C level (r2 uses Capstone underneath; I found this out in that very thread): if an instruction fails to decode, look at the bytes, and if they fit, handle it anyway. I stole this idea and implemented it in my engine; good ideas should be reused. Upstream, the answer mostly stayed "no, won't fix."

Second problem: as I said, basic blocks are a complicated topic. The definition said that each instruction is executed, always, and in order. So where is the limit of a basic block? Someone said it. Once again? On conditional jumps. And why not on a call? Exactly. And here is a problem. As I said, Nucleus is a great paper with a great implementation, very instructive to read, both the code and the paper. But they are completely different from the whole RE world: they are literally based on the definition. Because we really don't know what will happen after a call: the call transfers us to a completely different place in memory, things happen there, and will we return to the next instruction? We don't know. Meanwhile all disassemblers and all disassembling engines, IDA, r2, Binary Ninja, Miasm, Hopper, Relyze, you name them all (look, I'm restraining myself), treat the call as part of the basic block.
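The disagreement can be made concrete with a toy block splitter. The instruction format and terminator set here are invented for illustration; a real engine also starts new blocks at jump targets, which this sketch deliberately ignores.

```python
# Toy basic-block splitter illustrating the disagreement above:
# does `call` terminate a block (strict definition, Nucleus-style)
# or not (IDA / r2 / Binary Ninja-style)?
def split_blocks(mnemonics, call_ends_block):
    """mnemonics: list of mnemonic strings in address order.
    Returns a list of basic blocks (each a list of mnemonics).
    Ignores jump targets -- illustration only."""
    terminators = {"jmp", "je", "jne", "ret"}
    if call_ends_block:
        terminators = terminators | {"call"}
    blocks, cur = [], []
    for mnem in mnemonics:
        cur.append(mnem)
        if mnem in terminators:
            blocks.append(cur)
            cur = []
    if cur:
        blocks.append(cur)
    return blocks

code = ["push", "call", "test", "jne", "ret"]
print(split_blocks(code, call_ends_block=False))  # two blocks
print(split_blocks(code, call_ends_block=True))   # three blocks
```

The same bytes yield a different number of blocks under the two policies, which is exactly why features computed per basic block stop matching across tools that disagree on the definition.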

Since Nucleus is an academic product, it doesn't care about that convention. So this was another place where I had to check what actually works; I ran Nucleus, and it turned out the results didn't match. Let's move on. x86 is not exactly a simple language, and the same instruction is not rendered the same way everywhere, even when the opcodes are identical. Every tool shows it differently, and I'm not just talking about the two styles, AT&T and Intel. There is the difference between old and new mnemonics, Microsoft does everything its own way, IDA prints these opcodes one way, some tools print other ones, some print a mix. And why is that a problem? Because then, obviously, nothing lines up, because everything is screwed up. And if the basic blocks differ too, then of course it doesn't work, because the values come out completely different.

And that's a problem, because somewhere along the way there was... this is a hash of mnemonics: we take each mnemonic in a given basic block, assign it a value, and multiply them together. That gives us a bit of information about which mnemonics appear. If the mnemonics differ, we get different values, so we're screwed. Sorry, I'll get back to the slides in a moment. And that was the point where I spent four days of my life finding this bug. It turned out that my beloved radare2 has broken analyses, and some of its switches produced completely different, completely incompatible variants. It took me a long time, four hours, just to reproduce it. And then it came up again: there was a problem with basic blocks when exceptions are used on Windows. It turned out that with the proper analysis turned on, the result was completely wrong: it claimed this basic block ends here and that one starts somewhere else, and it all looked wrong. In the end I read the code of a product that uses this analysis all the time, and used that instead of the simple shortcut that's available, "a"... I don't know how many of them it should be; apparently the more "a"s you type, the better it works. To me, it's still weak. Binary Ninja isn't ideal either, so it will get some criticism too. This problem was also related to SEH: there was simply no code there.
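The mnemonic hash described above can be sketched like this. This is my own minimal reconstruction of the idea, not his exact scheme, and the alias table is illustrative, not exhaustive:

```python
def mnemonic_hash(block, base=31, mod=(1 << 61) - 1):
    """Multiplicative hash over a basic block's mnemonics: each mnemonic
    is mapped to a number and the block hash is their product mod a prime.
    Being a product, it is order-insensitive by construction."""
    h = 1
    for mn in block:
        v = 0
        for ch in mn:
            v = v * base + ord(ch)
        h = (h * (v + 1)) % mod
    return h

# Tools spell the same opcode differently (IDA prints 0xC3 as "retn",
# AT&T-style objdump prints "retq"), which changes the hash:
print(mnemonic_hash(["mov", "ret"]) == mnemonic_hash(["mov", "retn"]))  # False

# A small normalization table in front of the hash fixes that:
ALIASES = {"retn": "ret", "retq": "ret", "jz": "je", "jnz": "jne"}
norm = lambda block: [ALIASES.get(m, m) for m in block]
print(mnemonic_hash(norm(["mov", "ret"])) == mnemonic_hash(norm(["mov", "retn"])))  # True
```

This is exactly why divergent mnemonic spellings across tools break the comparison: identical code hashes to different values unless you normalize first.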

Because SEH handling is not trivial: it requires certain analysis and heuristics to detect that a given piece of code handles a specific exception and lives in one place rather than another. It is registered, because it is not part of the normal control flow of the program; it is an operating-system mechanism that makes certain things happen. So it is fully understandable that the engine did not support this. But I would like to fix it. I know how to fix it, I can track it down. Give me access to the code and I'll do it. There's no API, nothing. "Maybe tomorrow." I kept waiting and waiting, and never got it. The problem with Binary Ninja is that

there is no way to say that at a given place there is code that is not a function. The user can only define a function, not a plain piece of code. That's a problem if we want to extend the set of basic blocks beyond what the normal analysis detects. I could sit here and keep talking about how things could be done properly. If we say that we support a language, let's have an API for it, a real API, not JSON that we just shovel back and forth. And if we do have this JSON and we have these functions, let them work properly. This is my code for the rule that was supposed to collect all the instructions in

a basic block from radare2. As you can see, it goes like this: either we succeed, and then we take all the instructions at once, because we can, we can take all the instructions from a given block; and if not, then we take them one at a time, one by one, one by one. Because we can't write code that just works, yet we keep saying how awesome we are. But sure, let's add 2048 support, because that's what's really needed. Thank you for indulging the rant. Some tools, like Capstone, consider things to be... I won't even go there, sorry. Sad, isn't it? Was there anything else? No, I've said it all. That's it. If you have any questions... it's 2:17, I've run

over by 2 minutes. So, to sum up: it's not good, it's bad. There are tools that help us, but they also get in our way. Binary Ninja is really nice, and it's really cheap: it costs about $200, compared to 3.5 grand for Hex-Rays, so I recommend it. And that's it. Thank you very much. "Next year we'll tape over that beer for you." What will you give me? "We'll tape over the beer. You know, product placement." Oh, sorry, I took it because it was the only one in Biedronka. "You should have gone to Lidl." If it were Saturday, then Lidl, sure; but one, it's not Saturday yet, and two, I'm not going that far. Okay, alright. Thank you all for today. Thank you for the constructive feedback on the

stream, more or less. Thank you for your eloquence. Thank you for limiting the abstractions. Thank you all. Tomorrow we start the next presentations at 9:00. I hope there will be no more technical problems.
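The bulk-then-fallback pattern described in the rant above (take a whole basic block's instructions at once, and if that fails, decode them one at a time) can be sketched like this. The callbacks and their shapes are my own illustration, not r2pipe's real API:

```python
def block_instructions(start, end, fetch_block, fetch_one):
    """Collect the instructions of a basic block spanning [start, end).

    First try the bulk API (e.g. a whole-block disassembly call); if it
    raises or returns nothing, fall back to fetching one instruction at
    a time and stepping forward by each instruction's size.
    """
    try:
        insns = fetch_block(start)
        if insns:
            return insns
    except Exception:
        pass  # bulk call unavailable or broken; fall back below
    insns, cur = [], start
    while cur < end:
        insn = fetch_one(cur)  # -> (addr, size, mnemonic)
        insns.append(insn)
        cur += insn[1]
    return insns

# Simulate a broken bulk call, so the fallback walks one-byte "nop"s:
broken = lambda addr: (_ for _ in ()).throw(RuntimeError("bulk disasm failed"))
one = lambda addr: (addr, 1, "nop")
print(block_instructions(0x1000, 0x1004, broken, one))
# [(4096, 1, 'nop'), (4097, 1, 'nop'), (4098, 1, 'nop'), (4099, 1, 'nop')]
```

The fast path and the slow path return the same shape, so callers never need to know which one actually ran, which is the whole point of the workaround.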