Note : This is a cross-post from the Corelan Team blog.
Introduction

On March 21st I was in Paris for the annual Honeynet Workshop. This year, for the first time, there was a conference day open to the general public. Moreover, I didn't have to pay the registration fee since I had successfully completed one of the Honeynet forensics challenges. The day was split into four sessions, with talks covering Honeynet projects, malware, and the ethical and legal considerations of tracking botnets and eventual take-downs.
There was also a CTF taking place during the day, so I didn't take as many notes as I wanted to; this is also why I will not be covering all the talks in this article.
All the slides are available here : http://www.honeynet.org/node/626
R&D in the Honeynet Project, by David Watson

The first talk presented some of the current Honeynet projects. Over the years, the Honeynet Project has been a major player in the field of botnet tracking, releasing numerous open-source honeypots and articles on the subject.
Fortunately, projects are still very active, in part thanks to the Google Summer of Code, for which the Honeynet Project is a mentoring organization. By the way, if you are a full-time university student and would like to be paid to work on some kickass open-source software, the Honeynet Project was selected again this year and the application period starts March 28th.
As a quick reminder, an important concept with honeypots is the distinction between high-interaction and low-interaction honeypots.
Low-interaction means that the honeypot does not rely on the original system but emulates it. High-interaction honeypots are usually implemented as add-ons, for example a kernel module, that track the internal changes to the system.
Both approaches have their advantages. Low-interaction is usually safer since it emulates the system being attacked and is thus not vulnerable to flaws in that system. It also usually scales better, since it emulates only the parts needed and thus requires fewer resources, as opposed to high-interaction honeypots, which often require a complete virtual machine.
On the other hand, high-interaction honeypots are better at discovering unknown flaws (0-days). And depending on the complexity of the target system, implementing a high-interaction honeypot might take less time than writing an emulation stack for it.
The first project presented by David was Dionaea, a low-interaction honeypot that aims to replace Nepenthes, a popular Honeynet tool. The fact that it is written in Python makes it easier to extend than Nepenthes, which was written in C++. It integrates libemu for automated shellcode detection. It also has an SQL interface, which makes it easier to query the results than parsing the log files.
The second project David talked about is Sebek. It is a high-interaction honeypot that integrates into the Windows kernel. It currently uses SSDT hooking for tracing, a technique also used by rootkits (proof that techniques and knowledge are not malicious by themselves).
David mentioned they want to change the hooking to inline kernel modifications to make it stealthier. The replacement for this project is called Qebek; it uses QEMU and relies on breakpoints to monitor events, making it possible, for example, to see keystrokes on the system as they happen. I don't know if the authors of this software are aware that the project name sounds a lot like Québec, the province where I come from (and also the name of a project which you will learn about in the upcoming weeks/months, stay tuned!).
Finally, Glastopf is a web honeypot that emulates a web server and is useful for detecting attacks like RFI, LFI and SQL injection. The project's author, Lukas Rist, did a little live demonstration of his tool running on one of his web servers, and we could see attacks coming in every few seconds.
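To give a rough idea of how a low-interaction web honeypot can classify the requests it receives, here is a minimal sketch. The pattern names and regexes are my own simplifications, not Glastopf's actual detection rules:

```python
import re

# Hypothetical attack signatures for illustration, not Glastopf's real rules
PATTERNS = {
    "rfi":  re.compile(r"=\s*https?://", re.IGNORECASE),           # remote file inclusion
    "lfi":  re.compile(r"\.\./|/etc/passwd"),                      # local file inclusion
    "sqli": re.compile(r"('|%27)\s*(or|union)\b", re.IGNORECASE),  # SQL injection
}

def classify(query_string):
    """Return the names of the attack patterns matched by a request query string."""
    return [name for name, rx in PATTERNS.items() if rx.search(query_string)]

print(classify("page=http://evil.example/shell.txt"))  # ['rfi']
print(classify("file=../../etc/passwd"))               # ['lfi']
```

A real honeypot would go further and emulate the vulnerability (e.g. serve back what the attacker expects) to keep the attacker engaged and capture the payload.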
As you can see there are a lot of great honeypots being developed by the Honeynet project, make sure you have a look at them.
Efficient Analysis of Malicious Bytecode: Linespeed Shellcode Detection and Fast Sandboxing, by Georg 'oxff' Wicherski

In this talk, Georg presented a shellcode detection library he designed and explained some of its inner workings. He started with a quick overview of what shellcode is and how it is made position-independent via a GetPC sequence.
Apart from the traditional call/pop sequence, which is the standard approach, he also mentioned the use of floating-point instructions, namely fnop and fnstenv, to obtain the current address, a technique I wasn't aware of.
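To make the idea concrete, here is a deliberately naive scan for the byte sequences behind those two GetPC styles. This is my own toy sketch; a real detector like libscizzle does far more than pattern matching:

```python
def find_getpc_candidates(code):
    """Very naive scan for byte sequences commonly used as GetPC stubs.
    Real detectors also emulate the code to weed out false positives."""
    hits = []
    for i, byte in enumerate(code):
        if byte == 0xE8:                      # CALL rel32, typically followed by POP reg
            hits.append((i, "call/pop"))
    idx = code.find(b"\xd9\x74\x24\xf4")      # fnstenv [esp-0xc]: stores the FPU
    if idx != -1:                             # environment, incl. the address of the
        hits.append((idx, "fnstenv"))         # last FPU instruction executed
    return sorted(hits)

# classic call/pop GetPC: call $+5 (e8 00 00 00 00) ; pop eax (58)
print(find_getpc_candidates(b"\xe8\x00\x00\x00\x00\x58"))  # [(0, 'call/pop')]
```

Matching on 0xE8 alone obviously fires on plenty of benign data, which is exactly why emulation of the surrounding instructions is needed to confirm a hit.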
Georg then explained the differences between two current approaches to shellcode detection: statistical methods and pattern matching. Statistical methods rely on the likelihood of a sequence of instructions appearing inside or outside shellcode, much like Bayesian filters work to detect spam. This method requires training and is prone to both false negatives and false positives.
For these reasons, Georg preferred to implement a method based on GetPC sequence identification and then emulation of the instructions preceding the GetPC sequence to remove false positives.
Georg implemented this in a library named libscizzle, which uses libemu for emulation. Since one of the project's goals was performance, it also uses sandboxed hardware execution to make it faster.
Georg mentioned that he successfully used this library in CTFs (Defcon, RuCTFe). The library is available for download here in the form of a pre-compiled shared object (the Unix DLL equivalent), some header files and a little test application; the source code is not available.
High Performance Packet Sniffing, by Tillmann Werner

In this talk Tillmann explained the design of, and the need for, two tools he wrote: multicap and streams.
multicap is a high-performance packet sniffer designed to avoid dropped packets. To increase performance, Tillmann used a ring buffer to reduce memory allocations. He also used a PF_PACKET socket, which has the advantage of already including the timestamp with the packet, removing the need to call the localtime() function for every packet. Finally, multicap uses memory-mapped files to dump the packets, which should further increase performance. Tillmann did a quick demo of his tool; a performance comparison with existing tools like tcpdump and dumpcap would have been nice.
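The ring-buffer idea can be illustrated with a small sketch: all slots are allocated once up front, and when the consumer falls behind, the oldest packets are overwritten instead of triggering new allocations. This is my own simplified model, not multicap's actual (C) implementation:

```python
class PacketRing:
    """Preallocated ring buffer: constant memory use, oldest entries
    overwritten when the consumer falls behind."""

    def __init__(self, slots):
        self.buf = [None] * slots   # allocated once, reused forever
        self.head = 0               # next slot to write
        self.count = 0              # valid entries, capped at len(buf)

    def push(self, pkt):
        self.buf[self.head] = pkt
        self.head = (self.head + 1) % len(self.buf)
        self.count = min(self.count + 1, len(self.buf))

    def drain(self):
        """Return buffered packets oldest-first and mark them consumed."""
        n = len(self.buf)
        start = (self.head - self.count) % n
        out = [self.buf[(start + i) % n] for i in range(self.count)]
        self.count = 0
        return out
```

In the real tool the kernel side of PF_PACKET can fill such a ring directly (via a memory-mapped RX ring), so packets reach userspace without per-packet copies or system calls.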
The second tool is streams. It performs TCP stream reassembly on a packet trace (pcap file), similar to the "Follow TCP Stream" feature of Wireshark. streams is interactive and makes it possible to filter or search the streams.
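At its core, reassembling one direction of a TCP stream means ordering segment payloads by sequence number and resolving overlaps. A minimal sketch of that idea (my own simplification; it ignores wrap-around, gaps and retransmission policies that a real tool must handle):

```python
def reassemble(segments):
    """Reassemble one direction of a TCP stream from (seq, payload) pairs,
    keeping the first copy of any overlapping byte (naive overlap policy)."""
    data = {}
    for seq, payload in segments:
        for i, byte in enumerate(payload):
            data.setdefault(seq + i, byte)  # first writer wins on overlap
    if not data:
        return b""
    return bytes(data[k] for k in sorted(data))

# segments may arrive out of order and overlap
print(reassemble([(10, b"world"), (5, b"hello")]))  # b'helloworld'
```

Which copy "wins" on overlapping retransmissions is actually security-relevant: IDS evasion techniques exploit reassembly policies that differ from the end host's.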
Both tools are open-source and available here :
Basics of Honeyclients, by Angelo Dell'Aera and Christian Seifert

This talk dealt with two complementary subjects: the rise of client-side attacks and the tools developed by the Honeynet Project to detect those attacks. As I already talked a bit about PhoneyC and Capture-HPC in the first section of this article, I will focus mostly on the second part of the talk.
For a couple of years now, there has been a shift towards attacks on client-side applications (browsers, Flash, Adobe Reader, Java, etc.). Keeping client applications and all associated plugins up to date is a challenge for many users and enterprises, and as Christian mentioned, client applications are driven by end-users, who remain the weakest link of the security chain.
The talk then explained how cyber-criminals use the web to distribute malware via Malware Distribution Networks. Christian presented a diagram taken from the Microsoft Security Intelligence Threat Report which I found really interesting.
Source: Microsoft Security Intelligence Threat Report (http://www.microsoft.com/sir)
The attacks generally use multiple layers of servers.
The first layer consists of compromised web servers (often compromised via unpatched vulnerabilities in popular applications) which link to another server, most of the time via injected iframes. That second server, known as the redirector, embeds or redirects to a third server hosting an exploit kit. If one of the exploits succeeds, it downloads and installs malware from yet another server.
Generally, many infected sites point to the same exploit server, and the quantity of traffic diverted to it determines its effectiveness. Having multiple legitimate servers linking to a redirector also increases its ranking in search engines, which can be boosted further via SEO campaigns.
Spy vs. Spy: Countering SpyEye with SpyEye, by Lance James

The last talk of the day dealt with SpyEye, a botnet kit which has generated a lot of buzz lately since it is supposedly merging with ZeuS.
SpyEye is a kit cyber-criminals can buy for around 1,000 to 3,000 US$. It is customizable and comes with modules to steal credit card numbers and credentials via form grabbing in browsers, harvesting of credentials for FTP, POP, etc.; in summary, it's pretty nasty. It also comes with a web panel where crooks can see the bots they control and the information they have gathered.
Lance then explained that in the current version, a lot of files on the C&C server are world-readable via the AJAX interface, including debug logs, configuration files and SQL backups. Connecting via the web panel requires a password, and although Lance had recovered the password from the SQL backup, it would be illegal for him to log in from the USA. However, it is possible to connect a local SpyEye instance to a remote server (proxy mode) with no authentication whatsoever. Another advantage of this technique is that the botnet information is updated in the web panel in real time. Pretty neat :)
Lance also presented statistics on the botnet he tracked. It was discovered in October 2010 and infected 28,590 unique computers. When you consider the quantity of information that was probably stolen during such a short period of time, and the potential economic gain, it is not hard to understand why cybercrime is so popular.
The question of laws and ethics came up in this talk as well. Lance repeated numerous times that we are at a point where "Defense is dead" and we need to gain visibility. There is an increase in aggressive attacks on big companies, governments and even security firms (think HBGary), and the threat is growing exponentially and diversifying into politically oriented operations. Other attendees joined the discussion, and there was evident frustration and discontent with the fact that researchers have to combat adversaries who have no respect for laws or ethical principles and, for the most part, stay out of reach of the legal system, while the researchers themselves must adhere to high ethical standards (especially with regard to privacy) and weigh their every move to make sure they are not getting into legal trouble.
I really had a good time attending the Honeynet Workshop; it was great to get a glimpse of the Honeynet Project from the inside.