Analytical lifting: how Olympic weightlifting helped my IR workflows/analysis

Don't worry, I am not going to make this a #dfirfit post with lifts and all of that, but I wanted to share how Olympic weightlifting (which is "technical" in the sense of movement, positioning, and mindset) helped strengthen my IR workflows and analysis when I'm investigating. Now, I am not saying everyone should go lift and they will magically be better at their job (though it could help with stress and other things, but that is not the point here). I just wanted to share some personal experience and something I noticed after some time off from lifting due to my chronic health issues. Before anyone asks why I lift this way versus doing general strength and conditioning: it just happens to be a form of movement I enjoy (granted, it's a love/hate relationship), and I also enjoy the competition aspect of it as well... just don't tell my anxiety that, though at the same time I use it as a tool to work on that.

Now why be analytical with this? As someone that is mostly a remote athlete, I sort of had to be, as I do not get direct feedback like I would if I was lifting at my gym (don't worry, I do record lifts and send them to my coach, and I try to make the commute out to the gym when I am feeling good). I had developed a connection with how my body moves from yoga even before I started lifting, so I had a slight advantage, per se. You get to a point where you can feel that your weight has shifted a pinch forward, or that you transitioned from your first pull to your second pull a pinch earlier than normal. A lot of this is just minor variation that happens in a lift, and most of the time the faults come from when you first start pulling the bar off the ground; next thing you know, you have a learning opportunity for your next lift/session. It's these small details you need to pay attention to and feel, and you work on them rep after rep after rep, along with accessory movements and prehab for imbalances you might discover you have/areas that need some work.

Ok, I bet you might be wondering what the hell any of that means if you don't lift, or how it ties into incident response or even just general analysis (or, to keep it "cyber", being a SOC analyst). Well, I'll break it down into nice little bullet points of things that helped me along the way, to help make sense of the things I noted above; take what you want from it. Maybe it will help you realize that hobbies you enjoy can still build skills you might need in your "9-5".

  • First and foremost, you will make mistakes and you will "miss lifts"
    • Tying in the missed lifts: you will misread a log/data set, might think an alert is a false positive, or go down the rabbit hole analyzing trace data/MFT and try to follow the "bad thing", only to find that it was just an IT background script or the way a piece of software worked, and a false positive after all. This is a hard concept, as no one likes to be wrong; no one likes to miss when it comes to lifting (or to bomb out at a meet, aka not make any lifts at a meet), or to have a P4/P3 incident turn into a P2/P1 because something was missed. No one is perfect (good luck telling that to those of us with ADHD/anxiety... which is a whole other subject). You will make mistakes; what matters is how you take them, whether you let them beat you down or take them for what they are: something to learn from and grow. I know back when I was doing engineering the joke always was, if you haven't taken down prod once, are you even a sysadmin/engineer? Well, I attempted to patch and update the SIEM at my old job and it was down for a month... talk about a "learning experience". Now, obviously there are situations that can be more dramatic and hit harder (i.e. ransomware), but feel that shittiness for a bit (because feeling emotions is good, as I am learning), then notice that weakness and make it an area to work on: talk with your manager/leadership about training, go outside of work for mentorship/guidance on it, research it, and come back better for it after you get some "reps" in. TL;DR: don't quit because you missed; make it something you can learn from and grow on, and recognize when you are falling back into bad habits
  • Successories... aka accessory movements... aka training/research
    • I know I mentioned that with the main lifts you are doing rep after rep after rep, which gets you strong in those positions and teaches you to feel things for your core lifts, or the core skills of your job. If there is one thing I have learned from both, it is that you can always learn something. Technology is always changing, so do what you can (without burning yourself out in the process) and make time, if you can, for accessories that help with your primary job function, even if it is just a different way of doing one of your tasks. In a weightlifting sense, it might be doing clean pulls or snatch deadlifts; it's not a full snatch or clean, but it works the positions we need to get stronger at for the core lifts. In work terms, this would be doing CTFs/labs with tools you might not be used to using. Another example would be hamstring curls, pull-ups, farmer's carries, or even curls to hit weak spots that support the main lifts; at work this might be learning a specific function of a tool better, learning how to write more complex detections/queries, or just generally researching malware/vulnerabilities that have been affecting $employer. Again, the purpose of these is not to burn you out or add fatigue; if you are having an off day, pull these back or skip them. Also, if you can get $employer to help with this and get leadership support, awesome! I am also not saying to go buy subscriptions to every platform; follow the KISS methodology about it, and if you notice a weak area, bring it up and try to work on it when you can. Which also ties into the first point I made about not giving up because you made a mistake or weren't perfect about it. Before I forget, this can also be soft skills too (let's be real, most of us also need to work on writing reports/documentation...)
  • Communication is key
    • Not just external communication but also internal: listening to your body and how you are feeling. You might have days where personal/family stuff is going on (or in my case, chronic health issues) and you are just not feeling it, having issues with simple tasks (which can be frustrating and again leads back to point number one, which is why I made it the first point). Talk to your peers/managers/leaders, or in a lifting sense, your coach. You might need a day or two with pulled-back responsibilities, or even some mental health/"sick" days to recharge a bit (this also does not make you weak, FYI). If you don't tell anyone what is going on, no one will know or be able to help. If you just push through, while it might not lead to a physical injury like with lifting, it will likely lead to burnout or just emptying your... umm... f's to give, which leads to mistakes and can start a negative cycle really quickly. This also ties into the prehab stuff: if something is nagging at you, it is going to make everything harder, so work on addressing it through communication and building a network/support system to help with those nagging issues; that, along with the accessories, can get you back on track. Being honest, this is still an area I am working to improve on, but I still wanted to include it
  • Slow is smooth and smooth is fast…positions
    • AKA workflow. I'm not talking about IR playbooks or anything like that, but how you move through your work: how you scope out/triage an alert/event/detection, how you handle endpoint investigations, or log analysis. Some certifications or blogs will tell you a way to do it, much like there are default/standard forms for lifts, but you need to make adjustments. As someone that is 6'2", my starting position is not going to be the same as someone that is 5'2" (let's leave mobility out of it too...). You need to find the methods that work for you, make them almost clockwork, and get good at feeling for them or their timing (aka when you need to pivot to another dataset, or how you handle an investigation). Obviously laws, regulations, or policy might dictate the end results (i.e. how things need to be noted on evidence/chain of custody, templates for reports), but making the bulk of it fit how your mind works and likes to "move" will only aid you: it makes the workflow better, leads to less jumping around, and helps with interpreting/automating things, which also helps if you actually have the ability to build out automation via playbooks with SOAR, Jupyter notebooks, or other functionality like that

I was going to make this a bullet, but since it ties into the first bullet with networking/mentorship and communication: getting a coach, versus trying to do it all on your own, can help with all of this. Also, try to have fun... I know some days are going to suck and be hard, but try to enjoy it the best you can. This isn't your life; it is just a piece of it. I know there is the push of grind culture, and if that works for you, cool. But personally, I know when I am on my death bed or when my time comes, I would rather be remembered as someone that did the work and had some passion but was still human and a decent person, versus no one being there but hey, they caught an alert that stopped APT29, or only lived to lift...

Take from this what you can. I know some of the lifting analogies might not fully work for everyone, but this has really been on my mind for a little while now and I felt like writing about it. I'm sure I might need to make some updates down the road, as this was a lot more mentally fatiguing than I thought it would be. Maybe I should have gotten on my soapbox and ranted and saved this for a rainy day, but I found it interesting how hobbies and other things helped me in my career development, and maybe it will help others tie in their hobbies and activities and see how the lessons learned can build skills in other areas.

IR, what I wish I knew a year ago

I know it has been almost a year since I posted anything in this thing, but between the COVID pandemic and starting a job where I am now in IR (incident response), I spent the time trying to learn the processes, procedures, and tools used at $employer. I kept telling myself that when I learned something I would post about it, but it was always followed up by something else. So instead, I figured I'd reflect on the almost-year in my current role and, with an "if I could do it again" mindset, list what I wish I had spent some extra time learning. Of course, having a good understanding of the broad levels of incident response was helpful (like a base understanding of the NIST steps (NIST 800-61) and of what to look for in logs/SIEM alerts), but there were a few other things I needed to work on to build up a better base, which I'll list below.

  • Logs: Don't focus on a specific log source; be knowledgeable with a few different log sources and how to read them. I know at the old gig I had more firewall logs than I could shake a stick at, so I wasn't really sure how to read/fully interpret some of the other logs. When I first started here, my autopilot response was to look at the firewalls when I was going after my first "incident". Boy, was that a mistake. Not all firewall logs are created equal (some could be stateful versus next-gen). In some situations those can be useful, but for me a better base was understanding proxy, VPN, Windows (authentication, DNS, and of course event logs), and EDR data to better tell the story. Your SIEM might be able to do this, but depending on the volume of logs and events/alerts hitting it, you might be better off looking at log aggregation (like ELK) to really filter and get into the nitty-gritty of what you are looking for.
  • Linux/SQL/ELK command line/searching: While I could do basic navigation on the command line, I couldn't grep/less or use switches for the life of me. I am still learning and getting better with switches and some piping; this can also be SQL from the command line as well. I use grep a lot when looking at endpoint data and log files from Linux hosts, so it pays not to be afraid to try different switches/pipes, to use grep/less to search for certain things, and to understand how to search for data in telemetry stored in a SQL database (which can also be useful for things that use osquery... looking at you, OpenSOC CTF). I also have some basics down with ELK and the command line now, but I still need to work on regex for even better filtering/searches to get rid of some of the noise.
  • Netflows/PCAPs: Grouping these two together, as I generally grab PCAPs for specific netflows. While netflows were an issue at the old gig, we use flows a lot with NSM: being able to read the data the flows are giving you, and deciding whether it is worth pulling the PCAP from the NSM sensor to get information about an incident. I had done work with PCAPs before from a network troubleshooting standpoint and light IR, but now I take advantage of looking at the TCP/HTTP streams to see what is really going on with the traffic versus what the proxy/firewalls are saying about the communications. Also, being able to look at the HTTP headers to see what might be going on, and knowing how to pull data from the HTTP streams (useful to extract downloads that might have come in from IOCs or C2s for further analysis/sandboxing).
  • Report writing/timelines: Everyone can write notes and other things (if not, you should really be keeping your own notes outside of incident tracking/SOAR/ticketing systems in case those are unavailable during a critical incident; you can also quickly look back at them for repeat offenders and what to look for). However, being able to write something that is more in-depth and technical versus an executive summary is something that takes work. Summaries aren't going to need the same level of depth as a full technical report. Keeping notes on the timeline of activity is also good to have, in order to help build timelines (since they can help with the summaries as well).
  • Understanding dynamic analysis outputs from sandbox tools: I was tempted to call this "system internals", but figured this made more sense. I know about LOLBINs and .NET assemblies getting dumped into memory, but the work is understanding the connections between what is going on and the associated actions, and being able to explain that in both a technical and non-technical way (again, the writing thing). Along with being able to tie that information back into the endpoint data and searches to help find artifacts and some of the technical happenings on the host.
  • Scripting: This is still a weakness of mine (mostly regex) and something I'm still working on, as it is a "use it or lose it" skill. Since I'm not scripting a lot of the time, my limited abilities go right out the window. I am still working on my Python, and PowerShell to a lesser extent, to help pull some of the easy info I need.
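To make the command-line bullet above a little more concrete, here is a rough sketch of the kind of grep/pipe habits it is talking about. The file name, log format, and field positions are all made up for illustration, so adjust them to whatever data you are actually looking at:

```shell
# Sketch only: sample.log and its layout are invented for this example.
# Build a tiny sample log to play with:
printf '%s\n' \
  'Jan 01 10.0.0.5 Logon Failure user=alice' \
  'Jan 01 10.0.0.5 Logon Failure user=bob' \
  'Jan 01 192.168.1.9 Logon Success user=carol' > sample.log

# Case-insensitive search (-i) for the failures:
grep -i 'logon failure' sample.log

# Pipe into awk/sort/uniq to count failures per source IP
# (the IP happens to be the 3rd whitespace-separated field here):
grep -i 'logon failure' sample.log | awk '{print $3}' | sort | uniq -c | sort -rn

# For big files, less can search too; this opens at the first match,
# then n/N steps through hits (interactive, so left commented out):
# less +/'logon failure' sample.log
```

Nothing fancy, but chaining a filter into a count like this is usually faster than eyeballing a raw log, and the same sort | uniq -c | sort -rn pattern works for stacking almost any field.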

Those are just a few of the key things I've noticed about myself. Above all else, though, is not being afraid to make mistakes. I know with IR work you want to do it right 100% of the time and want to keep the $org out of the news, but if you are afraid to try things, you will not learn anything or become better. The same goes for trying tools/techniques: find what works for you and is in scope for your org, and just do your job to the best of your abilities. Also, do not be afraid to network and ask questions inside/outside your org.

These are my key points. I'm sure there are others who use different tools/techniques/procedures, but this is what I've really learned over the past year. I know I still have more to go, but that goes with the IR territory. Keep learning; keep trying to find the five W's of what you know or are trying to learn. Ok, this is actually the most important part: use your PTO/take mental health days when needed. You're no good to your org if you are always burnt out and depressed, so take care of yourself!

Is this thing on?

Things have been a little, well, nuts. With everything going on and starting a new job, blogging definitely took a back seat. Truth be told, I haven't had anything super interesting to write about, though in retrospect I could maybe have done one about the virtual Wild West Hackin' Fest, as it was a good con. Maybe I'll try to get something done by the end of the week.

I did finally get my laptop back up, and it does have DetectionLab on it now. I will say, setting it up in VirtualBox was super simple. So I think I'll start doing some tweaks with it and using it to go through some PCAPs and other detection tools, along with using it as an aid to study for CySA+ and Security Blue Team Level 1.

So long and thanks for the Response: why are we skipping post-incident activities?

While going back through some incident response topics with Security Blue Team, Cybrary, and LinkedIn Learning, there was a common thread whenever they went through NIST 800-61 (or even the SANS standard): do NOT skip the post-incident activities. Business is back up, right? Who cares? We got it all solved, so post-incident activities means going out to the local watering hole and having an adult beverage or six. That is what you mean, right? No. Not at all (although if you wanted to have post-post-incident activities, I guess this could work).

Now you might be thinking: why? Well, if you are a fan of standards/frameworks (if you are in cyber you probably are, since you do have compliance to meet, among other things), there is this whole step that can also improve the process for next time (and there will be a next time, as someone is going to have an off day and click yet another link to make your life full of fun). You wouldn't hire a pen-testing firm to come in and then not review the findings and look for ways to improve the organization's security posture, would you? You wouldn't run a red team/purple team engagement with tools like Red Canary's Atomic Red Team or MITRE Caldera and then ignore the results, would you? If so, what are you really getting out of it other than checking a box, which is not going to do anything to help you or the org (except maybe giving yourself a gold star)? You use these tools and skills to look at oversights and detections/alerts that are missing or not as clear as they should be, which you then work to fix; and to help your cause, you can map this to ATT&CK/Shield (as we all know, leadership likes charts, and frameworks are super helpful for some).

So the question is: we make time for that, so why are we not making time for the post-incident review? I get it, it's another round table, another day potentially spent going over processes/procedures, but if it makes your security posture better, by looking at the same things you would with a pen test or purple teaming, why skip it? It's like stomping out a fire without making sure its embers are fully out. If you don't improve the response to a particular incident, guess what happens? Those embers (the incident) rise into a flame, and next thing you know you are dealing with a similar incident and responding in the same way. Did that really work? Sure, you put out the big fire, but you did not ensure it was fully out, because you are still getting hit the same way you did before. Now, if your org is not giving you the time or capability to fully put out the fire, you probably do not have full management buy-in for your incident response program, as that should be the priority for those on the Computer Emergency Response Team (CERT)/Cyber Security Incident Response Team (CSIRT) when facing an incident.

I do understand that most are working with limited budgets and that security is seen as just a cost center (unless you are an MSSP or something along those lines). But how can the business really function under normal operations if you still have kindling flames of incidents (attack vectors that your IR plan/policy lacks coverage for) and you are constantly wasting time going back to "remediating" and leaving it at that? Maybe it's the military, and having to instruct a few classes, that taught me the importance of After Action Reviews (AARs), but skipping them for security incidents and then wondering why you are dealing with the same attacks over and over again fits the definition of insanity (doing the same thing over again and expecting a different result). Doubly so if the business gets affected by this time after time; you would think they would want it prevented, to avoid the bad publicity and having to issue statements over and over again.

Taking the time to try to close the gap with post-incident activities means ensuring senior leadership understands that this is a part of the process and that it is in the plan/policy. If they do not want to listen and want you to move on to other things, remind them of their buy-in/enforcement. If that fails, use ethics committees and other tools to ensure policies/plans/procedures are being enforced and allowed to be followed through to the end, the very end. At the end of the day, this is about bettering the security posture so the org can keep operations up, with plans in place to avoid shutting down production due to a cyber incident that affects the CIA triad.

I know not every org is like this, but when going through videos and other training, the theme that always seems to come up is "DO NOT SKIP THE POST-INCIDENT ACTIVITIES", so there is definitely something up with the industry and something we need to ensure we are doing. Along with running IR exercises (if you want some gamification of IR tabletops, look at Black Hills Information Security's Backdoors & Breaches card game). But I'll save that for another time.

DEF CON 28-DEF CON Safe Mode

DEF CON needed to run some A/V... Image © DEF CON

This was my first DEF CON, and the first fully virtual conference I attended. This post is going to be in different parts: the overall experience, the things I really liked, and a simple overview of the OpenSOC CTF (which I thoroughly enjoyed). I am still trying to get caught up on some of the various talks from the other villages, as a lot of my time was spent working on the OpenSOC CTF and the various workshops/panels with Blue Team Village (BTV). I was hoping to do a better write-up, but life found a way, and I wanted to spend some extra time with my wife, since me being "out" due to DEF CON put some extra strain on her.

What I really liked:

  • OpenSOC CTF: This was my first time doing the OpenSOC CTF, and I was not sure what I was fully getting myself into, as I was used to the jeopardy-style, more red-team-focused CTFs with web apps to break, steganography, and some forensics challenges. What really captured my interest was that this was more like a full-fledged SOC environment, and you had to investigate several different incidents. There were tools I had never really used or interacted with before (pfSense, Thinkst Canary tokens, osquery, Suricata, Graylog, Snort, Velociraptor, and Moloch). Thankfully, the Blue Team Village had some early "workshops" showcasing the tools and how to use them. It took a while to learn the syntax of each of the different tools (thankfully Graylog is built on top of ELK, which I do know some of the syntax for), but once I learned the syntax and got a better idea of what I was looking for, I was able to do well following the trail of several challenges. They did lead on from each other: each new "flag" was just adding another piece of the puzzle and told the story of what you were looking at.
    The networking portion of looking at the events/logs from firewalls/IDS/email (along with packets) I did really well with, but once we got into the nitty-gritty of filtering through Sysmon (thanks to Winlogbeat) and other endpoint data, I started to falter a little; I know it's an area I am currently working to improve on with DetectionLab. I was able to figure some events out (like a PowerShell script running, and which script was run, and was able to pull some "IOCs"), but what made this not fun was trying to bounce between this, the workshops I signed up for, and the various talks/panels... and, you know, eating XD (I know it was DEF CON, but I'm not about that life).

    I was not expecting to win or even crack the top 10 doing it solo, but for my first time using the tools, finishing around the lower end of the 50th percentile, I'm pretty pleased with myself. I found some weak spots I need to work on, and that is perfectly ok!
  • Workshops: There was some bad, as it was a firehose at points (and one ran around 10 PM EST), but these really were the bread and butter, other than the OpenSOC CTF. Ever since I did a Splunk workshop at GrrCON last year (it was really a guided Boss of the SOC v2, aka BOTS v2), I try to get into workshops. The ones I really liked were creating Jupyter notebook playbooks for threat hunting, NSM in the cloud (deploying Security Onion on AWS... noice), and writing Yara rules. I know I need to look back on the Yara rules and Jupyter notebooks, as A) I was really tired and B) I was in the middle of a good flow with the OpenSOC CTF XD.
    Both are tools I want to get better at using/understanding, to help automate some things and build up better rule sets (in the case of Yara). There was also one on IR which was good, but it was more of a guided CTF (I need to find the link, as it is always up). Thankfully the slides were shared with attendees, so now that things are calming down a little again, it's time to go through them (doubly so once I finish my DetectionLab setup, as I also want to add RITA from Active Countermeasures and maybe a Metasploitable or OWASP Juice Shop vulnerable web app) so I can start really applying these and playing around with them. That is the con: if you don't use it right away, you lose it, and this was really the case. Back in the normal grind at $employer, between catching up and other stuff, I didn't get the chance to review/play around like I wanted. If this post hadn't been so delayed, that would have helped too XD.
  • Talks/Panels: Part of the reason it's taken me so long to write this is that I am still trying to catch up on talks from all the villages. Being from the Metro Detroit area, I spent some time getting some entry-level knowledge of automotive hacking and the basics of automotive networks, which I was clueless about; it was good that it was pretty cut-and-dried and not overly complicated. I really did enjoy the panel talk on IR. As someone with a rapidly growing interest in DFIR/threat hunting/NSM and more active defense (decoys/honeypots), it was really informative to listen to people that do that for a living and get some insight. Another good talk was about BLUESPAWN. Funny enough, we had just worked with it in my intro to security course with John Strand/Black Hills Information Security. It was neat to get more background on the tool and part of the reason for its development. If you don't know, BLUESPAWN is an open-source active defense/EDR tool based in PowerShell (at least for what we used it for with the course).

Not bad in itself, but made it hard:

  • Discord... It was nice for all the villages and DEF CON to have separate channels. However, the alerting was so distracting, with people posting in all the channels. I tried to mute where I could, but then I'd miss things or just ignore channels altogether (sorry, Red Team Village). Also, I'm "that guy" that has the taskbar on top set to auto-hide (it's clutter), so every time there was a new ping the start menu would drop down and just be distracting. Granted, I know this is small, and really it was nice to have it "break out" like that, and the communities are still live even after DEF CON. But for someone that gets distracted or annoyed pretty easily, this was a big no from me XD.

I know this was long overdue, but work, plus trying to take care of some projects around the house that were put off due to DEF CON and weather, got in the way. This was probably the only way I'd be able to attend a DEF CON, as I'm not sure the wife would be thrilled with me taking PTO to fly to Vegas for it. Maybe I can talk to my $employer/contract house and see if they can cover some of it. Maybe life might find a way and I'll get to head out that way and score a badge (or a BTV one as well).

Onwards to Wild West Hackin' Fest in a few weeks!

Wild West Hackin’ Fest/Black Hills Information Security-Intro to Security 0(4) day

A few days late, as I have been pretty busy with some housework, had forgotten I was on call this week, and had a fun situation to deal with on Friday. Day 4 was bittersweet, as it was the last day of training and it had really started to get a community vibe to it. To be honest, it feels a little odd not interacting with everyone. We started the day off with some fun: Nmap, using Responder to grab LLMNR password hashes, and John the Ripper, before moving on to allow lists (versus deny lists, which I have been living on, and that makes me sad). Then malware and allow lists, OpenDNS and domain names (which I had to re-watch, as I missed an hour for an appointment... part of the reason for the delay), vulnerability management, threat emulation (again), then some DevSecOps/SDLC/web apps and automated testing with ZAP! There was some brief talk about AD hardening and PlumHound (BloodHound for purple/blue teams), brief talk on Mimikatz/PingCastle, cyber deception, and a last little bit of off-roading with the Active Defense Harbinger Distribution.

Deny lists do not work well: you are generally going to be behind trying to add "new" IPs/domains, because the internet is HUGE, and by the time those "new" IPs/domains get added, they are already old and attackers might have found a new site to take over/hijack. Not to mention, even with ALL the great user awareness training, someone is going to have a bad day and click the link. And it is super easy for attackers to get domains by buying existing or expired ones. Instead, allow-list categories (adding ALL the sites individually is a bad idea). You can still be compromised by a legit/"good" site with drive-by downloads (malvertising, for example, where the dropper or a bad link is placed in an ad on the legit site, and next thing you know, you are talking to a C2 site/server). There was a good conversation about using OpenDNS at home to help allow-list and protect yourself; I have been playing around with that and tinkering with a Pi-hole (but I still need to talk to my wife about getting our own cable modem/router, as Pi-hole and ISP all-in-ones do not mix...). There was also a pretty good discussion on DNS over HTTPS: it's good that DNS is now encrypted and protected, but the issue for defenders in the enterprise is that you lose visibility.

John had some really good tips on vulnerability management and how many orgs are still running the same program as they were 10+ years ago. Vendors have not changed: test/scan for internal/external vulnerabilities. Seems simple, right? Who needs to think about it, and there's no real new innovation at all. John also noted to use authenticated scans (which is a risk, as that account has read access into the servers and other assets it scans, and if it gets owned, you can be up a creek without a paddle). Still, do it: having done some stuff with Tenable/Nessus, unauthenticated scans look like a glorified Nmap scan and really do not give you much. You are better off authenticated, or, if you can support it, using agents. John then went on what we called a STRANT, or a STRAND RANT, which can be summarized with the snap from the slides.

AKA Garbage. Copyright Black Hills Information Security/John Strand.

You need to see the context of vulnerabilities, even with low/informational issues, as attackers might start with those and use them as the straw that breaks the camel's back, gaining access via the medium/low/informational findings. You need to look at more than a list of IP addresses to freak out over. Another good slide (not going to share this one) showed the usual approach: break the report up by IP address to make it easier to look at, then go patch and fix the computer at that IP... which is stupid, don't do it. The magic?

GROUP BY VULNERABILITY, NOT IP ADDRESS (AKA the crap the report spits out). Instead of worrying about the total number of vulnerabilities, you deal with the few that repeat across certain systems and use your tools to focus on that group of issues. IANS faculty have used this approach (versus listing by IP) and addressed over 1 million IP addresses, all vulnerabilities, in less than 3 weeks. To help with this, think of vulnerabilities as more than missed patches or bad configs: think about post-exploitation. What happens after an attacker gains access? This is where threat emulation can help point you at the low-hanging fruit and get those vulnerabilities addressed. I did miss a chunk of this discussion, but it was brought up that vulnerability scanners are gaining hardening tools/tips and can also implement CIS benchmarks (in the case of Tenable, for example), but they still have a long way to go and do not work as well as one would hope.
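To make the group-by-vulnerability idea concrete, here is a minimal Python sketch (the findings data is made up, not real scanner output) that flips an IP-keyed report into vulnerability-keyed groups:

```python
from collections import defaultdict

# Toy scan results: one row per finding, the way most scanners
# dump their reports -- keyed by IP address.
findings = [
    ("10.0.0.1", "MS17-010 missing patch"),
    ("10.0.0.2", "MS17-010 missing patch"),
    ("10.0.0.3", "MS17-010 missing patch"),
    ("10.0.0.1", "SMB signing disabled"),
    ("10.0.0.4", "Self-signed certificate"),
]

# Flip the report: group hosts under each vulnerability instead of
# listing vulnerabilities under each host.
by_vuln = defaultdict(list)
for ip, vuln in findings:
    by_vuln[vuln].append(ip)

# Fix the biggest group first -- one patch/config change, many hosts.
for vuln, hosts in sorted(by_vuln.items(), key=lambda kv: -len(kv[1])):
    print(f"{vuln}: {len(hosts)} host(s)")
```

Five findings collapse into three actual problems, and the top group tells you which single fix clears the most hosts.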

When talking about the Software Development Lifecycle (SDLC), we covered how security is often just bolted on at the end, which frankly does not work at all and just leaves issues. Part of the problem is people want to do it quick and cheap, and security takes time (because security is hard), versus having it done alongside the development work. Fun fact: most security testers know less about development, as it is a different skill set, and it is easier to teach a web developer some basic security practices using free tools. Testing with those tools should be done by a different team member (it helps to have a different set of eyes than the one doing the development). The tools are easy to use, and testing should be done weekly as a best practice (even better if done nightly). It will also make you a better developer, because you will start shipping code/applications that are less vulnerable, and you will become more effective.

Don't worry about testing for the crazy new hotness 0-day vulnerability; get the low-hanging fruit like:

  • Cross Site Scripting
  • SQL Injection
  • Command Injection
  • Misconfigurations

This takes away a large surface area and gets rid of the easy attacks. Most attackers are going to look for the easy way in and not try complex attacks, unless you're dealing with an APT (nation-state group) or someone specifically targeting the company, in which case, if you have proper NSM, application/server logs, and logs from perimeter network defenses (like firewalls, NIDS/NIPS, or a Web Application Firewall (WAF)), you might notice someone knocking on your door. While these tools do help test a lot of things, they do not catch logic errors, permission errors, stored cross-site scripting, or cross-site request forgery, as those need manual testing. John also brought up a good point: self-test and fix the easy vulnerabilities before you get an external test. Do you really want an external test where you get handed a bunch of XSS (cross-site scripting) findings, or would you rather make the testers dig for the logic errors and harder issues and get more value out of the test? Really though, these self-tests should happen on the regular. What are those tools? Burp Pro is awesome (and pretty cheap) and of course everyone's favorite, OWASP Zed Attack Proxy (ZAP). We went over labs on how they are used and set up (and since there is a free version of Burp and ZAP is open source, there are plenty of resources out there for getting started with these tools).
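Since SQL injection made the low-hanging-fruit list, here is a quick Python/sqlite3 sketch (toy table and toy malicious input) of why parameterized queries are the fix developers should reach for first:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

# A classic injection payload supplied as "user input".
evil = "' OR '1'='1"

# BAD: string concatenation -- the input becomes part of the SQL,
# so the WHERE clause matches every row.
bad = conn.execute(f"SELECT * FROM users WHERE name = '{evil}'").fetchall()

# GOOD: parameterized query -- the driver treats the input as data,
# not SQL, so the payload matches no actual user.
good = conn.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall()

print(len(bad), len(good))  # 1 0
```

Same input, two very different outcomes; the `?` placeholder is the whole trick.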

We then started to wrap up with the shift in focus we need: stop asking "can we be hacked?" and start asking "what can we detect?" Start by finding gaps, try to fill them, and move on. John noted to "steal this idea/framework" and use these tools to at least get started and show worth if you want to spend the $$$, but let's be real, how much budget do most Cyber Security/Information Security teams have? (News flash: not a lot, unless your organization has either A) gotten breached or B) a good security culture.) Before wrapping up we talked about AD hardening, using a tool like PlumHound to look for ways to harden AD, and honey accounts (spoofed domain admins, for example), which led into the pivot point of using HoneyBadger, which is part of ADHD. John had one last slide about threat intel: the generic feed work should be your AV/firewall/EDR vendor's job. Make actual threat intel out of the things you notice on your decoys and where threat emulation found weak spots, and use that information to start hunting in the environment for possible attackers.

I'm slowly plucking away at getting my lab set up, where I will hopefully have ADHD running so I can write about that..uh..fun adventure. I already put the firewall in transparent operation mode and pretty much killed the internet in my home, which led to an annoyed wife :P. I still need to add some things to this lab that John suggested (like Wireshark, to look at the actual traffic going on and get better with Wireshark..I'd say tcpdump, but Linux subsystems…)

If you enjoyed these posts and wish you had taken the course, good news!! It's happening again in November!!!! Still the pay-what-you-want model, same instructor, with most likely updated coursework/labs/slides. Sign up here!: https://wildwesthackinfest.com/online-training/getting-started-in-security-with-bhis-and-mitre-attck-november-0-395-16-hours/

Wild West Hackin’ Fest/Black Hills Information Security-Intro to Security 0(3) day

Today we shifted from logs and NSM and pivoted over to the endpoint with Advanced Endpoint Protection (think Endpoint Detection and Response…aka EDR). Or, for you new-TLA types, XDR, or Extended Detection and Response…but that covers more than just the endpoint..and is beyond the scope of this blog. Gee, I just got started and I'm already going off course…ANYWHO. We also looked at how to test what your endpoints are able to detect with tools like Atomic Red Team and BloodHound; the labs today focused on using BLUESPAWN (an open source EDR) with Atomic Red Team, and even the little "exploit" we created back when we tested AppLocker. There was also a touch on using host-based firewalls and segmented networks (even by endpoint!) and a bit about architecture: you need not just defense in depth/layered security but overlapping coverage, so you know your weak points and what coverage you have if something fails.

As mentioned before, EDR is better than your traditional run-of-the-mill AV and standard endpoint defense, which isn't super helpful (though I guess EDR is becoming more of a standard). EDR products look at the asset holistically, at processes and connections, which in the DFIR world is a huge advantage because you have the chain of events that happened on the endpoint, which in turn helps with the whole cyber kill chain. EDR can need some tuning work depending on certain processes being run by sysadmins. So now you have an EDR solution and want to make sure it is detecting/alerting/monitoring things. This is where threat emulation comes into play. Even if you are not full-on red teaming, it is useful for seeing how your EDR solution is working, or other products in your environment as well. Instead of just the usual popping a vulnerability, missing patches, or everyone's favorite misconfigured services, it goes into what happens after an attacker gets access (if you think you're not going to get breached, I hate to share bad news, but it's going to happen): lateral movement, the different processes that could be used to try to escalate privileges, or infecting systems. I was really excited to do these labs with BLUESPAWN (a free open source "EDR"…not useful for full prod, but super useful for testing out the tools coming up).
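As a rough illustration of that chain-of-events advantage, here is a toy Python sketch (hypothetical telemetry, not any real EDR's schema) that walks parent-process links to reconstruct how a process came to be:

```python
# Toy process-creation events: EDR-style telemetry is roughly
# "who spawned whom" -- walk it backwards to tell the story.
events = [
    {"pid": 100, "ppid": 1,   "name": "explorer.exe"},
    {"pid": 204, "ppid": 100, "name": "winword.exe"},
    {"pid": 310, "ppid": 204, "name": "powershell.exe"},
    {"pid": 455, "ppid": 310, "name": "whoami.exe"},
]

by_pid = {e["pid"]: e for e in events}

def ancestry(pid):
    """Follow parent links up the tree to reconstruct the chain of events."""
    chain = []
    while pid in by_pid:
        chain.append(by_pid[pid]["name"])
        pid = by_pid[pid]["ppid"]
    return " <- ".join(chain)

# Word spawning PowerShell spawning recon commands: a classic maldoc chain.
print(ancestry(455))
```

A plain AV alert on `whoami.exe` tells you almost nothing; the reconstructed chain (`whoami.exe <- powershell.exe <- winword.exe`) tells you someone opened a bad document.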

We discussed Caldera, which is created by the folks at MITRE (aka the people who created ATT&CK), Atomic Red Team, and BloodHound. Caldera and Atomic Red Team run attacks on assets while you have your EDR/endpoint protection in monitoring/alerting mode, while BloodHound maps how an attacker could get admin/full domain admin permissions in your network. It was great to play with Atomic Red Team (I say that as I am currently wearing one of their swag shirts) and see how it works. One point John made is not to be afraid of running these tools, even if it borks/breaks things. It just means you are doing your job; even in an IT role, something is off if you never have a "whoops" moment (I myself have taken down our SIEM at work…was a good time). The other point: if you are worried about backdoors in these open source tools, how many backdoors have been found in commercial products? Firewalls, endpoints, and networking equipment have all been found with backdoors. John also mentioned that while these tools are great, don't focus on ATT&CK Bingo and blocking all of ATT&CK. Attacks change, and with a few tweaks you're no longer detecting the technique and it's bypassed. This is where the commercial offerings (like SCYTHE or AttackIQ) come into play: they go beyond the basic ATT&CK building blocks and can use custom attack methods to check what you actually detect.

The last section was host-based firewalls. If you're not segmenting your networks, plz start, all the way down to your desktops and between subnets, as pass-the-hash/ticket and similar impersonation attacks have worked there. You need to assume you are going to get compromised/pwned (for real). What is really bad is an attacker persisting and being able to move laterally and pivot between systems. John had some good images showing different things that might not generate alerts (not going to copy them, as encouragement to take the course in November). You can even just use the default Windows firewall, but news flash: most endpoint protection vendors have built-in firewalls as well, which can be centrally managed and are far easier to use than netsh advfirewall.

Remember, with all of these, think about how they overlap, look for potential weak spots and how they can be mitigated, and ensure you have overlapping coverage across your endpoints/assets to build a good basic security architecture. I do like how John broke it down into overlaps versus plain defense in depth. Normally people think firewall, IDS/NIPS, endpoint protection, but think of it like this:

Chart that John Strand used to show the overlap between Network Security Monitoring (NSM: Security Onion/RITA, firewall alerts/signatures/traffic, netflows), the Security Incident Event Manager (SIEM), the combo platter of AV/Endpoint Detection and Response (EDR), and UEBA (see yesterday's blog post…User and Entity Behavioral Analytics, or UBEA as noted here).

I think this is a super useful way to think about it versus the castle or other models. We also had a quick talk on PVLANs, where the firewall helps control access between the VLANs (I need to look at this more).

Since the Nmap lab and Shodan off-roading adventure is almost over, time to get back to paying attention for the last day. Doubly so since I am going to miss an hour or so due to doctors. Thankfully some of these last topics are stuff I do day to day, so it won't be too bad 🙂 with a hopeful bonus section on using ADHD 🙂

Wild West Hackin’ Fest/Black Hills Information Security-Introduction To Security-0(2)day

I was hoping to get this up in the evening after the course, after some time to reflect, but I had to do battle with a lawn mower XD. Today there was 20 minutes of pre-show banter going over Metasploit/Meterpreter, as it was used the other day, and some information on LANMAN and NTLM password hashes, before moving into all the fun with LOGS (do you have the log song from Ren and Stimpy stuck in your head? I hope so).

This is sure to help!

While they were only part of the topics we covered, the big ones were egress traffic monitoring/visibility and then user and entity behavior analytics (or for you acronym types, UEBA…and for you QRadar types, UBA). Lots of stuff was covered; the thing that made me happy was no "YOU NEED FIREWALL LOGS!!!!!" (sure, maybe in a log aggregator with the more critical events going to the SIEM, but flooding it with single SSH connections….talk about a log toilet for your SIEM). The egress traffic piece mostly dealt with flows and PCAPs, looking for long connections/beaconing (or heartbeats/jitter to C2/C&C servers), and with how big of a pain it is to set up proper logging in Windows, or you know…use Sysmon and save yourself. There was also a brief talk about Sigma rules (and I also found uncoder.io, which helps translate Sigma rules and other SIEM rules, for the quick TL;DR).

The first key point we hit again was egress traffic monitoring, which fits the bill if you are covering C2/C&C (command and control) and exfiltration on MITRE ATT&CK. We need more than alerts (they really aren't enough without the proper context; as someone that works in a SIEM a lot…can I get an amen?). By properly monitoring/logging the egress traffic you can also spot OT/IoT/shadow IT devices egressing, so you can see what you might be missing in your asset inventory, or spot-check where NAC (network access control) might be failing. When you think egress traffic, a lot of people think "the firewalls," and it is nice to see that traffic, but do you really want to fill your SIEM with just firewall logs? You need a balanced mix of network and host-based data. A good example would be having your IDS alerts, any threat logs from firewalls, a network flow (like Cisco's NetFlow) to see the simple traffic (IP/MAC/port/basic packet info), and logs from the endpoint or server. Another helpful tool we talked about is Zeek (formerly Bro), an open source network analysis framework. It helps with consistency and, being open source, has a lot of support. It also helps with timestamps, which are key when doing analysis/network forensics, and gives you proper log files to see what is really going on versus waiting for the typical signatures (aka last week's attack). Zeek can also take full PCAPs for analysis (which you can then look at with other tools, which we covered later).

There was a brief bit on hunt teaming, actively looking for threats, the joys of going through logs (see above image), and using Active Countermeasures' free tool RITA (Real Intelligence Threat Analytics) to look for long connections/beacons; it can even check DNS against blacklists as well. We then talked about full PCAPs (which can put a lot of strain on the network if not captured carefully with the right tools). Everything supports PCAPs, but there is a learning curve (a good course is the Security Blue Team intro to network analysis course, which covers both Wireshark and tcpdump and gives you PCAPs to play with LINK; I am sure there are others as well). It was nice that they showed an example of how to set up RITA to capture the traffic.
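For a feel of what beacon hunting boils down to (this is my own toy sketch, not RITA's actual algorithm), here is a few lines of Python that score destinations by how regular their connection intervals are — low jitter means suspiciously machine-like:

```python
from statistics import mean, pstdev

# Hypothetical flow log: (timestamp_seconds, dst_ip) per connection.
flows = [(t, "203.0.113.9") for t in range(0, 3600, 300)]  # every 5 min
flows += [(10, "198.51.100.7"), (905, "198.51.100.7"), (2750, "198.51.100.7")]

def beacon_score(timestamps):
    """1.0 = perfectly regular intervals (beacon-ish), lower = human-ish."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return 0.0
    return 1.0 - min(1.0, pstdev(gaps) / mean(gaps))

by_dst = {}
for ts, dst in sorted(flows):
    by_dst.setdefault(dst, []).append(ts)

for dst, times in by_dst.items():
    print(dst, round(beacon_score(times), 2))
```

The host phoning home every 300 seconds scores a perfect 1.0, while the irregular browsing-like traffic scores much lower. Real C2 adds deliberate jitter, which is exactly why tools also look at connection counts and durations.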

From this we pivoted into user agent string examples you might see in traffic captures, like long/repetitive connections to Microsoft services or from a web browser, and JA3 for profiling SSL/TLS clients. Because security does involve math (yay…), long tail analysis was brought up for finding anomalies and outliers (if you went to school for cyber/information security and questioned taking a statistics class, I got news for you: you use it a lot). Never try to find the "needle in the haystack"; look for the odd traffic and possible anomalies instead. John then gave a shout out to Security Onion, which is the next thing I plan on setting up to help monitor what will be going on with my purple team lab and my normal home network. It comes with Zeek, Suricata, ELK, and other useful features.
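Long tail analysis really is this simple at its core; here is a toy Python example (made-up user agent counts) that surfaces the rarest entries first instead of hunting needles:

```python
from collections import Counter

# Toy proxy log of user-agent strings. The long tail idea: sort by
# rarity -- the one-off UA at the bottom is what deserves a look.
user_agents = (
    ["Mozilla/5.0 (Windows NT 10.0) Chrome/90.0"] * 500
    + ["Mozilla/5.0 (Windows NT 10.0) Edge/90.0"] * 120
    + ["python-requests/2.25.1"] * 1
)

counts = Counter(user_agents)
tail = counts.most_common()[::-1]  # rarest first

for ua, n in tail[:3]:
    print(n, ua)
```

Five hundred Chrome hits are just Tuesday; the single scripted `python-requests` client egressing from a user workstation is the outlier worth chasing.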

UEBA, hurray! (which means if you were in the course, this blog is almost done XD). Just a reminder: there is no "YOU GOT PWNED" log, only about 5% of detections come from logs, and traditional Windows logs are not super useful for security. You can use the JPCERT tools to try to help, but that can be a lot of extra overhead and different tool usage. Why do this? These logs can help tell the story with AD, Exchange, OWA, and other systems/system access. News flash: this requires tuning; as security professionals who are not red teaming, this is our job. If you just set up a blinky box and expect it to do all your work, it's not going to, and things will get missed (because blinky boxes like signatures or certain events). A good example is an internal password spray: one ID accessing multiple systems. This brings into view the "false positives" (which is a rant I have with QRadar and its tuning calling things "false positives": it's not really a false positive, it's abnormal traffic that isn't tuned properly). Think system/service accounts trying to log into multiple services (like that internal password spray example), sysadmins, scripts *groan*, and backups. It is why tuning is so critical. If your managers or CISO/CIO don't think it is important, there are tons of resources that point to the need for it; it is what a security team should be doing, and with that I'll get off my soap box.

UEBA can work by stacking, like cards: a failed user logon +1, a logoff -1. Set a threshold (say 6), so if a user tries to log in 7 times, or runs BloodHound and generates a pile of logon attempts, some sort of alert gets generated. As much of a dislike as I have for AI, UEBA can use it to learn what is normal and help with baselines: it can take the basic logs of account logins or data transfers, and when a user who normally pulls a gig of files is now pulling 5…you might have a problem (it can help with USB connections and other things too). Now that we want logs, how do we get them? All of them: systems, servers, services, network *insert log song here*. Getting the right logs takes time and is a pain, and there are many factors to take in that should align with risk (classification of the data on the asset, critical business assets, among others). John brought up a good point about getting AD/PowerShell/command line logs: it's hard and it's noisy, but what's even harder? Not having them at all. The logs can be fed to tools like LogonTracer to help see movement across the network. Another helpful tool is DeepBlueCLI, a sort of portable UEBA; while it isn't suited for a full enterprise deployment, it can be useful in a lab setting to see what logs you need, or as a nice addition to your DFIR toolkit. Same with DeepWhite, which parses Sysmon event logs and grabs the SHA256 hashes from process creation, driver load, and image load events. There was discussion of enabling Windows event logging for the important event IDs, and of setting up command line/PowerShell logging….which, TL;DR, is a long process. OR you can make your life easy by installing Sysmon. You can install it locally (SwiftOnSecurity's default config is a prime example), and there is a great article from Syspanda (https://www.syspanda.com/index.php/2017/02/28/deploying-sysmon-through-gpo/) on deploying it enterprise-wide via GPO.
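The stacking idea can be sketched in a few lines of Python (the weights and threshold here are made up for illustration, not from any real UEBA product):

```python
from collections import defaultdict

# Stack points per user: suspicious events add, normal events subtract.
# Alert when a user's running score crosses the threshold.
WEIGHTS = {"logon_fail": 1, "logon_ok": -1}
THRESHOLD = 6

scores = defaultdict(int)
alerts = []

def observe(user, action):
    scores[user] = max(0, scores[user] + WEIGHTS.get(action, 0))
    if scores[user] > THRESHOLD:
        alerts.append(user)

for _ in range(7):                # 7 straight failures -> over the line
    observe("bob", "logon_fail")
observe("alice", "logon_ok")      # normal behavior stays quiet

print(alerts)  # ['bob']
```

The `max(0, ...)` floor keeps a mountain of normal activity from banking "credit" that would hide a later burst of failures.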
Another helpful service, if you're using ELK/Elastic, is Winlogbeat, which does the nice and useful thing of sending all this info over to ELK! It can be messy going between different SIEM rule formats, which is why Sigma came up again briefly (you can convert Sigma rules to the SIEM flavor of your choice, as Sigma is meant to be a vendor-neutral SIEM rule format). There was also a slide on Exchange logging (and the best practice of enabling the log file, ETW events, and the max size). We then did the DeepBlueCLI lab. I cannot remember if we talked through it, but there was a slide from LogonTracer that focuses on 6 event IDs:

  • 4624-Logon Success
  • 4625-Logon Failure
  • 4768-Kerberos Authentication (TGT Request)
  • 4769-Kerberos Service Ticket (ST Request)
  • 4776-NTLM Authentication
  • 4672-Assign Special Privileges
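Those event IDs are also all you need for a crude internal password spray detector; here is a toy Python sketch (fake events, arbitrary threshold) that counts distinct accounts failing from a single source:

```python
from collections import Counter

# Hypothetical parsed Windows events: (event_id, account, source_ip).
events = [
    (4625, "alice", "10.0.0.50"),
    (4625, "bob",   "10.0.0.50"),
    (4625, "carol", "10.0.0.50"),
    (4625, "dave",  "10.0.0.50"),
    (4624, "alice", "10.0.0.21"),  # normal successful logon
    (4625, "alice", "10.0.0.21"),  # one fat-fingered password
]

# Spray signature: ONE source, MANY accounts, each failing once or twice
# (the opposite of brute force: many failures on one account).
THRESHOLD = 3

seen = set()
accounts_per_source = Counter()
for eid, acct, src in events:
    if eid == 4625 and (acct, src) not in seen:
        seen.add((acct, src))
        accounts_per_source[src] += 1

sprayers = [s for s, n in accounts_per_source.items() if n >= THRESHOLD]
print(sprayers)  # ['10.0.0.50']
```

Four different accounts failing from 10.0.0.50 trips the rule, while alice's single typo from her own box does not — which is exactly the tuning distinction from the UEBA discussion above.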

I started zoning out a bit, as my sysadmin experience is limited, but between the in-detail how-to on logging AD, the command line (as a lot of malware/bad things run ping, tracert, and netstat), and PowerShell, I was in full firehose mode, and there was some talk about Kerberoasting and other attacks as we started on password spraying.

I got a little lost at the end as I was trying to help out with some SIEM questions, and I'll admit I was over logging at this point and agreeing with John's point of view on SIEMs: not fully knowing how much you log, where you log from, who tells you what to log, and finally what percent of those logs even have an alert/signature for them. As someone who knew the SIEM at $employer was a dumpster fire, I jumped in and got my hands dirty trying my best to fix the charlie foxtrot, and I learned a lot along the way. There are a lot of questions like this, and I am trying to be more active with giving back and really seeing how much I know (and helping myself learn it better by trying to teach it).

Time to get a little pump of coffee to get ready for today's pre-show Backdoors & Breaches demo!

Wild West Hackin’ Fest/Black Hills Information Security-Introduction To Security-0(1)day

I was debating doing this as an end-of-the-week post going over the things that were discussed and learned, but with the sheer amount of stuff covered I feel that'd just be a huge wall-o-text in need of a TL;DR. While the course stays pretty wide across 11 topics, if you are doing them, and doing them well, they are a great starting point. You can tell the team at Black Hills Information Security (BHIS), led by John Strand, still covers a lot…just not in the firehose SANS style or other "boot camp" styled courses.

Today, the first day of the course, of course started with John going on some of his uhh…rants: treat your internal network as hostile (i.e., treat it like your local coffee shop's network), don't just use one security vendor/product, and compliance. I agree with him on compliance: too many orgs just check the box, when really you should use compliance/audits to help push the org's security posture. I know it's easier said than done though, as security is a cost center, so you need to apply proper risk to sell doing more than just 7-character passwords for PCI-DSS, as an example (even though that is less of a requirement than the NIST green book from the 80's…I digress). We talked about the 11 controls/Key Tracking Indicators, which they call the atomic controls. They are:

  1. Application Allow List
  2. Password Controls (Good ole IAM)
  3. Egress Traffic Analysis
  4. UEBA
  5. Advanced Endpoint Protection
  6. Logging (which I noted properly, not just a log toilet for a SIEM)
  7. Host (endpoint) Firewalls
  8. Internet Allow List
  9. Vulnerability Management (done properly based on Biz/org risk and actual asset inventory)
  10. AD Hardening
  11. Back up/recovery

I'm sure anyone who has been in the security business will know some of these; they also tie into the MITRE ATT&CK framework (with the good reminder not to play ATT&CK Bingo/Jenga).

What was super ironic is that right after this we pivoted to compliance. I learned about the useful AuditScripts tools for auditing against various compliance standards and seeing where you stand. Fun fact: everyone in the course went to the site at once and we may or may not have DDoS'd it with a hug of death. These can help break things into smaller frameworks and cross-reference into your other core frameworks.

For me the next part, the application allow list, hit home; dealing with deny lists on a proxy is a pain in the butt, and I hadn't even thought about it from an endpoint perspective. I watched this live at a GrrCON talk where Dave Kennedy updated Magic Unicorn, changed a single character of text, and was owning his test Windows box because Windows Defender and other products didn't have that signature in the deny list. It was brought up in the course that attackers are now breaking PowerShell up across different parts of scripts. It was also cool to look at ghost writing: take a Ruby executable with Meterpreter, make it a .asm file, edit the asm, convert it back to .exe, and you are an infosec wizard because you edited something in assembly (John's joke). We also talked about encoding and AV bypass via encoding and obfuscation (see the GrrCON example above). We then went over application whitelisting using whitelisted directories (simply only allowing applications to run from certain directories). While this can be bypassed, many initial access attacks (drive-by downloads) execute from the Downloads, Desktop, or temporary directories. Hash whitelisting was also discussed, but it is a pain to implement and keep up to date, as are digital certs, since not all vendors sign all their .exe or .dll files. For this course we used AppLocker (which, I'll admit, it has been a minute since I heard anyone mention). Native to Windows, it can whitelist and/or deny based on path, hash, cert, or vendor. We created a simple policy with just the defaults of allowing the Program Files and Windows directories. The one thing that could catch admins off guard is needing to turn on the "Application Identity" Windows service on the local systems (I'm assuming you could push this out via AD if needed). You also need to push the GPO out, which was an issue the demo gods did not like, as in our case you need to wait for replication between user accounts.
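The directory-whitelisting concept boils down to a simple path check; here is a toy Python sketch of the logic (this is not how AppLocker is implemented, just the idea of path-based rules):

```python
from pathlib import Path

# Allowed directories, mirroring the AppLocker default rules:
# only binaries living under these paths get to run.
ALLOWED = [Path("C:/Program Files"), Path("C:/Windows")]

def allowed_to_run(exe):
    """True if the executable sits under an allowed directory."""
    exe = Path(exe)
    return any(d in exe.parents for d in ALLOWED)

print(allowed_to_run("C:/Windows/System32/notepad.exe"))  # True
print(allowed_to_run("C:/Users/bob/Downloads/evil.exe"))  # False
```

This is also why the bypass conversation matters: anything an attacker can write into `C:\Windows\Tasks` or another user-writable spot under an allowed tree slips past a pure path rule, which is where hash and publisher rules earn their keep.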

The next fun was everyone's favorite: password controls (I can hear your groan from here). There was a good discussion on password spraying with everyone's favorite <season><year> passwords (I'm sure "password" or other default creds would work too if you know any of the devices from recon, as people are smart and forget to tweak those or the system accounts). Of course this requires a harvesting attack first; then the attacker plugs the IDs into Burp and tries their best. Once an account is pwned, without proper monitoring or UBA it can be hard to detect. We talked about two tools for this (CredKing and FireProx).

The big issue discussed is that even with 2FA/MFA, orgs still have password policies of between 8 and 10 characters. That is only somewhat OK if 2FA/MFA covers the entire board; if not, bad things are going to happen, as an 8-10 character password can get pwned in as little as an hour or so (or even faster if the attacker has a password cracking rig). A good idea for compliance and auditing is regular scanning of your authentication points (with regular pentests-John). Regardless of 2FA/MFA, encourage the use of passphrases: just a few random words that make a phrase. Still, for complexity and to throw off password crackers that might be using dictionary attacks (using common dictionary words to brute force passwords), ensure there are numbers and special characters too. Passphrases can be easier for users to remember, since they can use dictionary words to build a random phrase they will actually recall. You can also use a password manager with a good passphrase and 2FA to help you or your users with passwords (it helps avoid password reuse or just changing 1 character). Of course, if the password manager gets breached, all your accounts could be compromised (then again, that's no worse than using the same password for everything, so a good first step is to stop that).
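Some back-of-the-napkin Python on why passphrases hold up (the guess rate is a made-up round number, and real cracking speed depends entirely on the hash type, rig, and rules):

```python
import math

# Rough keyspace comparison -- illustrative only, not a cracking model.
charset = 95                     # printable ASCII characters
short_pw = charset ** 8          # 8-character "complex" password
wordlist = 7776                  # Diceware-sized word list
passphrase = wordlist ** 5       # 5 random words

print(f"8-char password  : ~{math.log2(short_pw):.0f} bits")
print(f"5-word passphrase: ~{math.log2(passphrase):.0f} bits")

# At a hypothetical 100 billion guesses/sec against a fast hash:
rate = 100e9
print(f"8-char worst case: ~{short_pw / rate / 3600:.0f} hours")
```

Roughly 53 bits versus 65 bits: every extra word multiplies the work by ~7776, which is a much better deal for the user's memory than one more random symbol.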

There was some basic 2FA talk (as mentioned above), aka something you know plus something you have. It can be tokens, SMS, or app-based (all are better than no 2FA), and it was pointed out how SMS 2FA can be attacked with SIM cloning. After this brief talk on 2FA (you could talk vendors and setups forever, and this is just an intro course after all), we got to the bane of just about any IT professional's work life: service accounts. AKA the accounts used by products or scripts to do things, where changing a password can bork production on a Friday and make you the on-call guy who gets to have an adventure. These need passwords that expire and lockouts, even if it causes an internal outage, because attackers love them precisely because they get overlooked. We then applied what we learned about passwords and how fast they can be cracked with Hashcat. I was used to John the Ripper from school (almost 3+ years now…crazy) and doing rainbow tables (and we discussed how ineffective those are now compared to, say, Hashcat). I'll admit I will probably have a blog post shortly with some Hashcat adventures once I get my home lab fully set up, and once it's not muggy/hot in my study (it's upstairs and it's currently in the mid 90's and muggy; I don't need a 1RU server making it even more toasty up here). Even as someone who is more on the blue team/DFIR side of the house, understanding how these tools work lets you better understand the risk they pose and the attack vectors attackers might use when going after password hashes.

We didn't get to password spraying today, so that will be done tomorrow. Hopefully I remember to pop in a half hour early for the quick refresher; it might be useful for poking my deception systems at work when I am back, to see how they are alerting XD

Looking forward to tomorrow though, with egress traffic analysis; as someone that is more analytical I love looking at logs and PCAPs and trying to make sense of them, and I already see the notes about tuning with UEBA (which is part of our jobs, like it or not, if you're more blue team focused).

Though I am wondering if we are going to make it that far tomorrow. Regardless of where we get, there is going to be a lot of good info and some shenanigans along the way!

BHIS Webinar Intro to Cyber Deception

On 1/23/2020 I attended a webinar from the fine folks at Black Hills Information Security: an intro to cyber deception (aka honeypots). As someone who might be getting shifted into a more proactive security role versus all the things I handle now, I was really interested in this subject.

What really threw me off was how John Strand tied this into more applicable threat intelligence. I was also interested in cyber threat intel, mostly stemming from the fact that I could have done intel when I was enlisted but chose logistics because it used computers…anyway. We get threat feeds, and I am currently working with MineMeld at work to help automate some of our responses to known IoCs. The interesting and kinda "bad" thing about feeds is that they only deal with IoCs that have hit all kinds of industries, and who knows how old they are by the time the IR and forensics teams got the info out; they could well be stale. They also do not deal with the active threats hitting your enterprise, so you could be blocking a bunch of threats that have nothing to do with you while the attacks actually going on are not getting blocked or looked at. What really hit me as useful is being able to get an attacker trapped in a honeypot and alerting on a honey account (an admin-type account that no one should ever access, so any touch can alert in your SIEM or some other tool of your choice). With this going on you can look for IoCs, possible C2 servers, and the likely attack vector being used, along with possible source IPs.
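A honey account tripwire really is about this simple in concept; here is a toy Python sketch (the account names and events are hypothetical) of the alert logic:

```python
# Decoy admin-type accounts that no legitimate user or system ever
# touches -- so ANY authentication event naming them is an alert.
HONEY_ACCOUNTS = {"svc_backup_adm", "da_jsmith2"}

auth_events = [
    {"account": "alice",      "src": "10.0.0.21", "event": 4624},
    {"account": "da_jsmith2", "src": "10.0.0.99", "event": 4625},
]

alerts = [e for e in auth_events if e["account"] in HONEY_ACCOUNTS]
for a in alerts:
    print(f"HONEY ACCOUNT TOUCHED: {a['account']} from {a['src']}")
```

No baselining, no tuning: the decoy has no legitimate use, so the false-positive rate is close to zero and every hit hands you a source IP to start hunting from.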

Black Hills Information Security Honey Pot Linux distro Active Defense Harbinger Distribution (ADHD)

What was also good about this webinar is that I learned about the Active Defense Harbinger Distribution (ADHD). Similar to Kali or even FLARE-VM and how they handle pentesting or reverse engineering, ADHD handles setting up honeypots. The tools are well documented, with a basic walkthrough for each one. I will need to get some time to fully dig into using this in my lab and see how they function. This will also be a combo deal, as I can use the traffic I generate attacking these honeypots to further learn some packet/traffic analysis as well. Still, before I fully go after this I think I need to finish setting up my Security Onion server and my FortiGate firewall.

I'd like to thank Black Hills Information Security for this great webinar. I wish I had taken better notes or had the time to configure ADHD before posting this; hopefully I'll get some tinkering in this week.

Link to the slides: https://www.linkedin.com/posts/black-hills-information-security_getting-started-in-cyber-deception-activity-6626562689147695104-hFbJ