I know it has almost been a year since I posted anything on this thing, but between the COVID pandemic and starting a job where I am now in IR (Incident Response), I spent that time trying to learn the new processes, procedures, and tools used at $employer. I kept telling myself that when I learned something I'd post about it, but it was always followed up by something else. So instead I figured I'd reflect on the almost-year in my current role and, with an "if I could do it again" mindset, list what I wish I had spent some extra time learning. Of course, having a good understanding of the broad strokes of incident response (like a base understanding of the NIST steps (NIST 800-61) and some idea of what to look for in logs/SIEM alerts) was helpful, but there were a few other things I needed to work on to build a better base, which I'll list below.
- Logs – Don't focus on a specific log source; be knowledgeable with a few different log sources and how to read them. At the old gig I had more firewall logs than I could shake a stick at, so I wasn't really sure how to read or fully interpret some of the other logs. When I first started here, my autopilot response was to look at the firewalls when I went after my first "incident". Boy, was that a mistake. Not all firewall logs are created equal (some could be stateful versus next-gen). In some situations those can be useful, but for me a better base was understanding proxy, VPN, Windows (authentication, DNS, and of course events), and EDR logs to better tell the story. Your SIEM might be able to do that, but depending on the volume of logs and events/alerts hitting it, you might be better off with log aggregation (like ELK) to really filter down and get into the nitty gritty of what you are looking for.
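To make the "tell the story" point concrete, here's a rough Python sketch of merging two log sources into one ordered view. The log lines, field names, and values below are all made up for illustration; real proxy and Windows authentication formats will look different.

```python
from datetime import datetime

# Hypothetical log lines from two sources (formats invented for this sketch)
proxy_log = [
    "2023-04-01T10:02:11 user=jdoe dst=evil.example.com action=ALLOWED",
    "2023-04-01T10:05:42 user=jdoe dst=cdn.example.net action=ALLOWED",
]
auth_log = [
    "2023-04-01T10:01:58 user=jdoe event=4624 host=WKSTN-07",  # successful logon
]

def parse(line):
    """Split 'timestamp key=value key=value ...' into a dict."""
    ts, *fields = line.split()
    record = dict(f.split("=", 1) for f in fields)
    record["ts"] = datetime.fromisoformat(ts)
    return record

# Merge both sources into one timeline so the story reads in order
events = sorted((parse(l) for l in proxy_log + auth_log), key=lambda r: r["ts"])
for e in events:
    print(e["ts"], e.get("event") or e.get("dst"))
```

The logon lands before the outbound proxy hit, which is exactly the kind of cross-source ordering a SIEM or ELK stack does for you at scale.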
- Linux/SQL/ELK command line/searching – While I could do basic navigation on the command line, I couldn't grep/less or use switches for the life of me. I'm still learning and getting better with switches and some piping, and this applies to SQL from the command line as well. I use grep a lot when looking at endpoint data and log files from Linux hosts, so it pays to not be afraid to try different switches/pipes, to use grep/less to search for specific things, and to understand how to query telemetry stored in a SQL database (which is also useful for things built on osquery… looking at you, OpenSOC CTF). I have some basics with ELK and the command line now, but I still need to work on regex for even better filtering/searches to cut out some of the noise.
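For instance, the kind of `grep -E`-style filtering I'm talking about can be sketched in Python with the `re` module. The log lines here are fabricated examples in a syslog-ish shape:

```python
import re

# Sample auth-log-style lines (made up for illustration)
lines = [
    "Apr  1 10:01:58 host1 sshd[812]: Failed password for root from 203.0.113.9 port 52211",
    "Apr  1 10:02:03 host1 sshd[812]: Accepted password for jdoe from 198.51.100.4 port 50022",
    "Apr  1 10:02:09 host1 cron[411]: session opened for user root",
]

# Roughly equivalent to: grep -E 'Failed password' auth.log
pattern = re.compile(r"Failed password for (\S+) from (\d+\.\d+\.\d+\.\d+)")
hits = [m.groups() for line in lines if (m := pattern.search(line))]
print(hits)  # -> [('root', '203.0.113.9')]
```

The capture groups pull the account and source IP straight out, which is the same habit that pays off in ELK queries: match first, then extract the fields you actually care about.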
- Netflows/PCAPs – Grouping these two together, as I generally grab PCAPs from specific netflows. While netflows weren't a thing at the old gig, we use flows a lot with our NSM, so you need to be able to read the data the flows give you and decide whether it's worth pulling the PCAP from the NSM sensor to get more information about an incident. I had done work with PCAPs before from a network troubleshooting standpoint and light IR, but now I take advantage of looking at the TCP/HTTP streams to see what is really going on with the traffic versus what the proxy/firewalls are saying about the communications. The same goes for looking at the HTTP headers to see what might be going on, and knowing how to pull data from the HTTP streams (useful to extract downloads that might have come in from IoCs or C2s for further analysis/sandboxing).
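As a rough illustration of carving a payload out of an HTTP stream, here's a minimal Python sketch. The response bytes are fabricated, and in practice you'd get the reassembled stream from your NSM tooling or something like Wireshark's Follow TCP Stream / Export Objects rather than a hardcoded string:

```python
# Split a reassembled HTTP response into headers and body so the
# payload can be carved out for sandboxing. Bytes are fabricated.
raw = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: application/octet-stream\r\n"
    b"Content-Length: 4\r\n"
    b"\r\n"
    b"MZ\x90\x00"  # classic 'MZ' PE magic at the start of a dropped binary
)

# Headers and body are separated by a blank line (CRLF CRLF)
head, _, body = raw.partition(b"\r\n\r\n")
status, *header_lines = head.split(b"\r\n")
headers = dict(h.split(b": ", 1) for h in header_lines)

print(status)                    # b'HTTP/1.1 200 OK'
print(headers[b"Content-Type"])  # b'application/octet-stream'
print(body[:2])                  # b'MZ' -- worth carving out and sandboxing
```

The headers alone often tell a story (a `Content-Type` that doesn't match the URL, odd `User-Agent`s), and the body is what you'd hand to the sandbox.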
- Report writing/timelines – Everyone can write notes and such (if not, you should really be keeping your own notes outside of your incident tracking/SOAR/ticketing systems in case those are unavailable during a critical incident; you can also quickly look back at them for repeat offenders to know what to look for). However, being able to write something in-depth and technical versus an executive summary is something that takes work. Summaries aren't going to need the same level of depth as a full technical report. Keeping notes on the timeline of activity is also good to have, since it helps you build timelines (which in turn help with the summaries as well).
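A tiny Python sketch of what I mean by keeping timeline-friendly notes: timestamped entries (invented here) that you can sort into order later, no matter how messy the order you jotted them down in:

```python
from datetime import datetime

# Hypothetical analyst notes: (timestamp, observation) pairs written
# down out of order during an incident
notes = [
    ("2023-04-01 10:05", "beacon to suspected C2 domain from WKSTN-07"),
    ("2023-04-01 10:01", "phishing email delivered to jdoe"),
    ("2023-04-01 10:03", "macro-enabled doc opened, child process spawned"),
]

# Sort by parsed timestamp so the narrative reads start-to-finish
timeline = sorted(notes, key=lambda n: datetime.strptime(n[0], "%Y-%m-%d %H:%M"))
for ts, what in timeline:
    print(f"{ts}  {what}")
```

The discipline is in always attaching a timestamp to the note; the sorting is the easy part, and the sorted output drops straight into both the technical report and the executive summary.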
- Understanding dynamic analysis outputs from sandbox tools – I was tempted to call this "system internals" but figured this made more sense. I know about LOLBINs and .NET assemblies getting dumped into memory, but the skill is understanding the connections between what is going on and the associated actions, and being able to explain that in both a technical and non-technical way (again, the writing thing). Along with being able to tie that information back into the endpoint data and searches to help find artifacts and some of the technical happenings on the host.
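To sketch the "tie it back" idea in Python: pull the process tree and network IoCs out of a sandbox report so they can be pivoted into proxy/DNS/EDR searches. The JSON layout below is invented; every sandbox has its own schema, so treat the field names as placeholders:

```python
import json

# Made-up sandbox report fragment (real tools each have their own schema)
report = json.loads("""
{
  "process_tree": [
    {"name": "winword.exe", "children": [
      {"name": "powershell.exe", "children": []}
    ]}
  ],
  "network": {"hosts": ["203.0.113.9"], "domains": ["bad.example.com"]}
}
""")

def walk(nodes, depth=0):
    """Flatten the process tree into (depth, name) pairs for the write-up."""
    for n in nodes:
        yield depth, n["name"]
        yield from walk(n["children"], depth + 1)

tree = list(walk(report["process_tree"]))
iocs = report["network"]["hosts"] + report["network"]["domains"]
print(tree)  # winword.exe spawning powershell.exe is the story to tell
print(iocs)  # pivot these back into proxy/DNS/EDR searches
```

Word spawning PowerShell is the technical finding; "the document ran a script that called out to a bad domain" is the non-technical version of the same tree.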
- Scripting – This is still a weakness of mine (mostly regex) and something I'm still working on, as it is a "use it or lose it" skill. Since I'm not scripting most of the time, my limited abilities go right out the window. I'm still working on my Python, and PowerShell to a lesser extent, to help pull some of the easy info I need.
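As an example of the "easy info" pulls I mean, here's a naive Python regex for scraping IPv4-looking strings out of a blob of alert text (the text is made up). The pattern is deliberately sloppy: it would also match out-of-range octets like 999.1.1.1, which is exactly the sharp edge that makes regex a use-it-or-lose-it skill:

```python
import re

# Fabricated alert text to scrape
alert_text = """
Outbound connection from 10.0.4.22 to 203.0.113.9:443 flagged.
Secondary callback observed to 198.51.100.77.
"""

# Naive IPv4 matcher: four 1-3 digit groups, no range validation
ipv4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
print(ipv4.findall(alert_text))
```

Thirty seconds of regex beats copying IPs out by hand, and the same pattern drops into grep or an ELK query once it's in muscle memory.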
Those are just a few of the key things I've noticed about myself. Above all else, though, is not being afraid to make mistakes. I know with IR work you want to do it right 100% of the time and keep $org out of the news, but if you are afraid to try things you will not learn anything or get better. The same goes for trying tools/techniques: find what works for you and is in scope for your org, and do your job to the best of your abilities. Also, do not be afraid to network and ask questions inside and outside your org.
These are my key points. I'm sure others use different tools/techniques/procedures, but this is what I've really learned over the past year. I know I still have more to go, but that comes with the IR territory. Keep learning, and keep trying to find the five "W"s in what you know or are trying to learn. Ok, this is actually the most important one: use your PTO and take mental health days when needed. You're no good to your org if you are always burnt out and depressed, so take care of yourself!