Friday, December 31, 2010

"Twitter Segmentation": or why you need at least three accounts

After I'd been using Twitter for a few months (ok, I really don't remember how long it had been) I realized that I needed two accounts for a number of reasons: privacy, personal branding, and audience. Well, now that I've created @seclectech, I'm up to three accounts. Here is why (besides the fact that I take my time on Twitter way too seriously).

As much as I love shooting the shit with folks as @frednecksec, that is just one side of me. And arguably not my best side. I know it. It is the snarky, sarcastic, cynical side that can't help but mock the endless FUD about SCADA and Smart Grid Security and the never-ending drumbeat of Sinophobia, and that generally reacts by picking positions I may or may not believe, just for the fun of it, and just to mess with people.

But I don't want to engage in that all the time, and it is not healthy either. Sometimes I don't want to follow people that engage in that sort of nonsense (or who cause me to engage in that sort of discourse); I just want to learn about cool technical stuff (whether security or non-security related). Folks like @jonoberheide @jordansissel @jedisct1 @rgaidot consistently provide solid technical information about the topics I'm interested in (or might become interested in) without the snark (i.e. security and technology politics) of other folks I follow on @frednecksec who might be more entertaining.

If Twitter had better filtering options (for example, ones that would allow me to block hashtags such as #stuxnet or #wikileaks or #tsa) I might not need to do this, but until that happens I need two different public accounts depending on the tweets I want to produce or consume.
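Until Twitter grows that feature, the filtering I want is easy enough to sketch client-side. A toy example in Python (the tweet list and tag set here are made up; a real version would sit between the API and your timeline):

```python
# Hypothetical sketch of client-side hashtag muting, since Twitter
# itself doesn't offer it. Tweets and the blocklist are illustrative.
BLOCKED_TAGS = {"#stuxnet", "#wikileaks", "#tsa"}

def is_muted(tweet_text, blocked=BLOCKED_TAGS):
    """Return True if the tweet contains any blocked hashtag."""
    words = {w.lower().rstrip(".,!?") for w in tweet_text.split()}
    return not blocked.isdisjoint(words)

timeline = [
    "Another #stuxnet hot take you can't miss",
    "Neat writeup on nfqueue packet mangling",
    "TSA grope-downs again #tsa",
]
visible = [t for t in timeline if not is_muted(t)]
# only the nfqueue tweet survives the filter
```

Crude, but it would let one account follow the entertaining folks while skipping the topics I'm tired of.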

Friday, December 24, 2010

How did your technical skills fare in 2010?

Well, my first full year at SAIC is coming to an end and it is time to take stock of what I've learned, technically, over the past year. I hope to write another blog post on management, project, and customer-engagement skills, because, truth be told and like it or not, I've grown more in those areas than technically in the past year.

Now your day job shouldn't be the only thing determining the level of your technical skills, but it is obviously a major factor. In the last quarter, I picked up leadership/management responsibilities, so I will be mindful in 2011 to ensure that I at least maintain the skills I've got and don't become "soft" and hands-off. This point has been made crystal clear to me as I've been reviewing the resumes of folks (though I've seldom interviewed them) who have let their skills go. A cautionary tale. And from some that I have managed to interview, I've heard the line "I could learn that again if I needed to" one too many times. Folks, hold on to the skills you hold precious, even if you have to do it on your own time or change jobs. It is just too easy to let things go, especially if you are doing well and making an impact for your organization.

The last year has been no different than other periods. Historically, my coding occurs in fits and starts. At the beginning of the year I was involved in a research project where I was doing a bunch of Python development with MongoDB, but that ended early on. Near the end of the year there was more light scripting, mostly using dpkt, scapy, and nfqueue-bindings for some protocol testing that will hopefully end soon so I can stop the dreaded commute two times a week to Tysons Corner.
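The client work itself isn't something I can share, but the flavor of that kind of scripting is easy to show. Here's a stdlib-only sketch in the same spirit (the real scripts used scapy and dpkt; all values here are illustrative): hand-packing an IPv4 header and verifying its checksum, the sort of plumbing you end up doing constantly in protocol testing.

```python
import struct

def ip_checksum(header: bytes) -> int:
    """Standard ones-complement sum over 16-bit words (RFC 791)."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_ipv4(src: str, dst: str, payload_len: int = 0) -> bytes:
    """Build a minimal 20-byte IPv4 header (UDP, no options)."""
    ver_ihl, tos, ident, flags_frag, ttl, proto = 0x45, 0, 0x1234, 0, 64, 17
    total_len = 20 + payload_len
    src_b = bytes(int(o) for o in src.split("."))
    dst_b = bytes(int(o) for o in dst.split("."))
    hdr = struct.pack("!BBHHHBBH4s4s", ver_ihl, tos, total_len, ident,
                      flags_frag, ttl, proto, 0, src_b, dst_b)
    csum = ip_checksum(hdr)           # checksum computed over zeroed field
    return hdr[:10] + struct.pack("!H", csum) + hdr[12:]

pkt = build_ipv4("192.0.2.1", "192.0.2.2", payload_len=8)
assert ip_checksum(pkt) == 0  # a correct header re-checksums to zero
```

Scapy hides all of this behind `IP()/UDP()`, which is exactly why it's worth doing by hand at least once.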

Networking & Security Products
I really enjoyed working with ScreenOS on the lower-end Juniper SSGs. Most of my firewall experience was Cisco (or PF) so it was a refreshing change of pace. It was a bit frustrating at first, but both the ScreenOS CLI and WebUI are preferable to anything Cisco has ever developed. On the other hand, I did not enjoy working with Garretcom industrial switches, especially their screwed-up way of configuring trunk ports and VLANs. Not to mention their screwed-up Flash UI. Working with one of the wireless client/bridge/APs commonly used by some of the armed services was also a mixed bag, but I learned a little bit about RF surveys and the pros and cons of Mesh vs. Bridge vs. Client/AP wireless architectures. And then there is Air Defense, which I probably haven't learned as well as I could have, but there wasn't enough time. In the lab, I kept my skills up with IOS and ASA based SSL VPNs, but this was actually something I first learned in 2009.

A few of my projects have been true engineering (as opposed to assessment) projects that involved specific requirements (either provided to us or ones we had to develop ourselves for the client) where we were responsible for network/system design, configuring all the components, and finally bringing them to deployment and handing them over to the customer. The "big enterprise" operational/IT skills I picked up at Hewitt definitely helped here. A couple of projects have required designing and implementing a new wired and wireless network infrastructure (as well as the appropriate C&A activities necessary to connect it) and this has been an interesting challenge, sort of like putting together a puzzle as you figure out how to connect all the various L2/L3 infrastructure and security devices together.

Miscellaneous Stuff
Besides fooling around with MongoDB for about a month solid, I learned that some of the RAID controllers on Compaq hardware are lame and that backing up and restoring ESX filestores can be extremely slow and painful if you have SATA drives. I also got hands-on experience (mostly just getting things working and defining an appropriate security architecture, as opposed to hard-core device hacking) with several different smart meters, collectors, and headends, but nothing to write home about and certainly nothing worth posing for a newspaper in front of my hacking gear. Last, but not least, I've had the pleasure, working with a great new client, of taking a deep dive into a popular SCADA system and architecting a real-time vulnerability management solution that will be deployed in the new year. I didn't learn any new products here, but it was definitely a new experience developing a solution from scratch.

It seems like I should have done more than this, but perhaps this is even better than 2009. I learned some new products and APIs, and wrote a bit of new code. I guess it could be worse: as a senior engineer for a large government contractor I could be writing technical proposals and pricing non-stop. Fortunately I've been able to dip down and keep my hands dirty now and then, in addition to the normal QA (and tough questions) to keep engineers honest and on track.

So it's been a decent year, but in 2011 I need to do better, and this will be a challenge as there is no doubt that more of my time will be spent managing employees as opposed to just leading projects. Undoubtedly I will need to do work on my own time to keep it real. Stay tuned for some goals I will define for the new year.

Saturday, December 04, 2010

Greatest Hits from the Arce/McGraw Article on Cyber FUD

These guys nailed it in Software [In]security: Cyber Warmongering and Influence Peddling and here are my favorite lines:

The (perhaps intentional) conceptual roll up of cyber crime, cyber espionage, and cyber war into the scariest of cyber boogeymen exponentiates the FUD factor, making an already gaping policy vacuum more obvious than ever before
Amen. I still don't even know what "CyberSecurity" really means. Back in 2003-2004 when I first heard the term, I thought it was a way for "non-security folks" (putting physical security folks in that bucket) to refer to IT/Computer/Network Security. But I don't know anymore. This conflation is confusing.

The problem with these kinds of stories is that they have somehow worked their way to the halls of policymakers who repeat them without critical analysis. For every careful Dan Geer there are ten shrieking cyber security talking heads busy stirring the pot saying things like, "We may call it espionage, but it's really warfare."
The "World's Greatest Hacker" is the least of our concerns because he isn't influencing policy in the beltway.

What makes us particularly skeptical is the intentional blurring of the lines that helped to distinguish the military, the intelligence community, and the cyber security industry — a direct result of US government pouring of billions of dollars into the burgeoning maw of perpetual cyber security initiatives.
Is it any coincidence that the cyber-euphoria coincided with the US economy going to hell, as IT security vendors "cyber-ize" themselves? There are quite a few Austin startups (you know who you are) that have become "Cyber Security Vendors" to get that Federal money.
They point out that those beating the cyber war drums the loudest are at least partially responsible for the sorry state of affairs in computer security. Retired Director of National Intelligence (DNI) Admiral Mike McConnell bears the brunt of this criticism, as do one-time NSA Director and Deputy DNI General Mike Hayden, and one-time cyber czar Richard Clarke. We know all of these men and they are all honorable and careful. Like us, they are all capitalists as well.
Anybody that goes after "Digital Pearl Harbor" Clarke is OK in my book.

Public/private partnerships pander politically but they do no real good. As it turns out, security is not a game of ops centers, information sharing, and reacting when the broken stuff is exploited. Instead, it is about building our systems to be secure, resilient, survivable, and trustworthy.
They go after all my favorite buzzwords. The public-private partnership is when the vendors and contractors (and sometimes critical infrastructure asset owners) write all the policy to their economic advantage.

In conclusion, this article is a strong defense of defense and of building security in. We should let the military and the Intelligence Community do their job, and the rest of us (in the Information/Network/Application/Internet Security profession) should focus on ours and stop trying to play "soldier hacker." Of course the irony is that some of the biggest "CyberWar Cheerleaders" have no background in the military, the intelligence community, or computer security.

Sunday, November 28, 2010

Netflow on the Endpoint?

So you probably most commonly think of Netflow as a router feature (where you can monitor chokepoints to identify top talkers), but over the long holiday weekend I've used it as a way to monitor traffic behind crappy closed-source SOHO APs that don't allow you to turn off NAT. I've started running netflow on some of my Linux endpoints, and just for grins I enabled it on my work laptop. The various systems point at a single Netflow receiver, but on my laptop I obviously point the probes at a local receiver.

On Ubuntu/Debian it is as simple as:

apt-get install flow-tools softflowd fprobe

Softflowd and fprobe both allow you to generate Netflow datagrams to send to a netflow receiver such as flow-tools. In both cases the probes have single configuration files in /etc/default that allow you to specify the interface to monitor and the address and UDP port of the receiver.

root@e6400:/var/flows/2010/2010-11/2010-11-28# cat /etc/default/softflowd
# configuration for softflowd
# note: softflowd will not start without an interface configured.

# The interface softflowd listens on.

# Further options for softflowd, see "man softflowd" for details.
# You should at least define a host and a port where the accounting
# datagrams should be sent to, e.g.
# OPTIONS="-n"

root@e6400:/var/flows/2010/2010-11/2010-11-28# cat /etc/default/fprobe
#fprobe default configuration file


#fprobe can't distinguish IP packet from other (e.g. ARP)

Since neither of these probes allows you to monitor multiple interfaces, I'm using both to cover my wired and wireless interfaces.
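For the record, the gist of my two defaults files looks something like this (the interface names and collector port here are hypothetical examples, not my real config; check `man softflowd` and `man fprobe` for the exact option syntax on your version):

```sh
# /etc/default/softflowd -- wired side (example values)
INTERFACE="eth0"
OPTIONS="-n 127.0.0.1:9995"

# /etc/default/fprobe -- wireless side (example values)
INTERFACE="wlan0"
FLOW_COLLECTOR="127.0.0.1:9995"
```

Point both at the same receiver address/port and the flows get merged on the collector side.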

Next, I configured flow-tools by editing /etc/flow-tools/flow-capture.conf with a single line:

-w /var/flows -n 275 -N 3

This stores the netflow data in the /var/flows directory, and the receiver listens on the UDP port that corresponds to what we configured in the probes above.

I found that if the directory isn't present, the daemon will fail to start. The error message will show up in the logs but not on the console.

When I go into work tomorrow and plug into my dock, this should do the trick, but we'll see.

The only thing I'm not sure about is whether the daemons will correctly handle a downed interface, so I may have to manually start the daemons.

Now you'll see that the files are being created:

root@e6400:/var/flows/2010/2010-11/2010-11-28# ls -alt | head -20

total 248
drwxr-xr-x 2 root root 4096 2010-11-28 18:10 .
-rw-r--r-- 1 root root 88 2010-11-28 18:10 tmp-v05.2010-11-28.181027-0500
-rw-r--r-- 1 root root 96 2010-11-28 18:10 ft-v05.2010-11-28.180515-0500
-rw-r--r-- 1 root root 96 2010-11-28 18:05 ft-v05.2010-11-28.180001-0500
-rw-r--r-- 1 root root 96 2010-11-28 18:00 ft-v05.2010-11-28.175448-0500
-rw-r--r-- 1 root root 96 2010-11-28 17:54 ft-v05.2010-11-28.174935-0500
-rw-r--r-- 1 root root 96 2010-11-28 17:49 ft-v05.2010-11-28.174422-0500
-rw-r--r-- 1 root root 96 2010-11-28 17:44 ft-v05.2010-11-28.173909-0500
-rw-r--r-- 1 root root 96 2010-11-28 17:39 ft-v05.2010-11-28.173356-0500
-rw-r--r-- 1 root root 96 2010-11-28 17:33 ft-v05.2010-11-28.172843-0500
-rw-r--r-- 1 root root 96 2010-11-28 17:28 ft-v05.2010-11-28.172330-0500
-rw-r--r-- 1 root root 96 2010-11-28 17:23 ft-v05.2010-11-28.171816-0500
-rw-r--r-- 1 root root 96 2010-11-28 17:18 ft-v05.2010-11-28.171304-0500
-rw-r--r-- 1 root root 96 2010-11-28 17:13 ft-v05.2010-11-28.170751-0500
-rw-r--r-- 1 root root 96 2010-11-28 17:07 ft-v05.2010-11-28.170237-0500
-rw-r--r-- 1 root root 96 2010-11-28 17:02 ft-v05.2010-11-28.165725-0500
-rw-r--r-- 1 root root 96 2010-11-28 16:57 ft-v05.2010-11-28.165212-0500
-rw-r--r-- 1 root root 346 2010-11-28 16:52 ft-v05.2010-11-28.164659-0500
-rw-r--r-- 1 root root 806 2010-11-28 16:46 ft-v05.2010-11-28.164146-0500

And most of these are empty. I should adjust the rotation interval so it creates fewer files.

But I can see what sort of activity my laptop was up to while I was dealing with my youngest son's terrible in-between-two-and-three during supper.

root@e6400:/var/flows/2010/2010-11/2010-11-28# flow-cat ft-v05.2010-11-28.164146-0500| flow-print

srcIP dstIP prot srcPort dstPort octets packets
            17      67      68     576       1
            17      68      67     656       2
            17   58772      53      61       1
            17      53   34490     100       1
            17      53   38384      51       1
            17      53   39480      51       1
            17   39480      53      51       1
            17   34490      53      61       1
            17   53304      53      60       1
            17      53   34640     100       1
            17   38384      53      51       1
            17      53   58772     100       1
            17   34640      53      61       1
            17      53   38674      76       1
            17    5353    5353    2611       9
            17   38674      53      60       1
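Eyeballing raw flow-print output gets old fast; a few lines of stdlib Python can summarize it. A quick sketch (the sample rows below mirror the flow-print output above, with the srcIP/dstIP columns omitted; the function name and column layout are my own):

```python
from collections import Counter

def top_dst_ports(flow_print_text, n=3):
    """Tally octets per destination port from flow-print style rows:
    prot srcPort dstPort octets packets (IP columns omitted here)."""
    octets = Counter()
    for line in flow_print_text.strip().splitlines()[1:]:  # skip header
        prot, sport, dport, octs, pkts = line.split()
        octets[int(dport)] += int(octs)
    return octets.most_common(n)

sample = """\
prot srcPort dstPort octets packets
17 67 68 576 1
17 58772 53 61 1
17 5353 5353 2611 9
17 39480 53 51 1
"""
print(top_dst_ports(sample))  # UDP 5353 (mDNS) tops this sample
```

Mostly DHCP, DNS, and mDNS chatter, in other words: exactly what you'd expect from an idle laptop at the dinner table.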

Sunday, November 14, 2010

Five Quick Interview Tips for Security Folks

So I've been spending 1-2 days a week for the last month or two interviewing folks for some open reqs. And yes I'm still hiring.

I've done quite a bit of interviewing (on both sides of the table) over the years in both big and small companies. In terms of the "big guys," I probably view Microsoft's process as the "gold standard." I also interviewed onsite with Amazon a while back, and I've definitely incorporated some of what I experienced as a candidate at these west coast firms into what I do as an interviewer. Early in my days at Cisco we also did some good interviewing of candidates.

But based on recent experience, here are some things I've noticed, both in terms of tips and turn-offs. Most of these are common sense.

1) Speak in concrete and specific (vs. abstract and general) terms about deliverables, responsibilities, tasks, and accomplishments. Convey a clear sense of who you are, what you like to do, and what you have accomplished. What is your career trajectory? Connect the dots for me. Even if you are a "paper security" person (as opposed to a hands-on technical type) you can and should speak in terms of specific standards, documents, processes, and data.

2) If asked about a given technology, the wrong answer is "another team did that" or "we weren't allowed to do that." Even if it is true. Find another way. This is a common problem with IT/Operational types and makes it difficult for me to envision you working in consulting, R&D, or other roles where you need to be flexible and will fill gaps where you find them.

3) Admit that you have forgotten certain technical skills if you've been "doing security" (or anything technical) for any period of time. If you say you haven't forgotten anything, you are either lying or a robot. In the long run, it is better to communicate in clear terms what you do or do not know. Plus, if you do get hired, something you claimed you were able to do (but that perhaps couldn't be verified during the interview process, for whatever reason) may come back to haunt you, as you will most certainly be handed a task that depends on it.

4) When asked a seemingly factual question, the wrong answer is "I don't know" or "I could google it and find out." That is not the point. The point is to figure out what you know "around" that problem space. Plus, you are not going to get off that easy. I will take the question down a notch.

5) If asked if you can code, never, ever say you took Java/C programming in college but haven't done any coding since then. Even if it is true. The "Modern American Poetry" classes I took are just as relevant. And resumes should err on the side of fewer skills rather than more. It makes it a pleasant surprise when you happen to know something not listed on your resume. And if you put Nmap or Nessus on your resume (and most security folks do), please know what these tools do, because I will ask.

Monday, September 27, 2010

Hackers, Crackers, Terrorists, and Other Things You Should Care Less About

@kodefupanda: Who cares who #stuxnet target was? The takeaway is that ICS security is a prob that effects us all. We need solutions not attribution.

@taosecurity: @frednecksec Attribution is necessary if you want to deal with the threat. It's not necessary if you only want to address vulnerabilities.

@frednecksec: @taosecurity Depending on who "you" are. If you are a scada admin and are behind on the vulngame, threats are somebody elses problem.

So, for quite a while now, one of my pet peeves in any security talk/whitepaper (especially a SCADA or control system security one) has been when the author has a list of bullets under a headline called threats. You know, the bad guys. Typically they throw in some cheesy clip art. Even worse, they will talk about the motivation: fun, profit, curiosity, world domination, etc. I always found this annoying and irrelevant. Really, who cares why someone is attacking (attempting to exploit a vulnerability in) your system? It really does not matter who they are, apart from an IP address that you may or may not be able to trace back and may or may not be able to report to law enforcement. You focus on what you can control. Your own networks. Your own systems. Assuming you even have the time, talent, and tools to do that.

But admittedly I have a bias here. Most of my career has focused on vulnerabilities. And for most of my career, I've been focused on the technical realm. Not policy or procedures. Not politics. Not targeting the bad guys (well, at least after I left tactical MI, and I only targeted bad guys in warfighter command staff exercises, never in the real world). It has been about monitoring your assets, protecting them, ensuring devices, applications, and other hardware/software components are properly engineered so that when they are deployed operationally they can stand up: that you've done a reasonable job reducing the attack surface, that the right set of security capabilities has been implemented, that you've thought things through. You pay lip service to threats (attack trees, threat models, etc.) but you really are only concerned about that magical moment when a threat exploits a vulnerability. That event. The goal is to prevent that or make it as unlikely as possible, or if it does happen, to minimize the impact.

When you are concerned about technical vulnerabilities, the capabilities or intent of the threat agents don't really matter, unless there is an intersection with the assets you are responsible for monitoring or protecting--or securing prior to deployment if you are in product security or appsec. So the adversary has some new tool (malware, script, or whatever): what matters is whether I can detect it, monitor for it, and recover from it. There is some new way of exploiting applications or network access controls or surreptitiously gaining unauthorized access: this is why you pentest, why you do design reviews, why you do operational drills. It is really not about threats. It is about your stuff. Not their stuff or them.

So if you come from this vulnerability-centric frame of mind (or at least I think I'm accurately capturing this outmoded way of thinking about the brave new world full of Cyberwar, APT, Cyberterrorism, and what my Senator this morning referred to as "Cyber Shields") you become sort of confused when folks like Bejtlich say that this no longer matters, that this is an outmoded approach not appropriate to the 2nd decade of the 21st century. That it has all failed, that we must give up and go after the Chinese dragon or the Russian bear. That we must stop all we are doing. That defense no longer works.

You know, sort of like a Bush Doctrine for cyberspace. Take the fight to them.

(To me there is a difference between failure and the fact that you have to continuously stamp out vulnerabilities, over and over, Microsoft Tuesday after Microsoft Tuesday, new application after new protocol. A never-ending struggle that guarantees job security for a lifetime. This might be insanity, but it is not failure. But I digress.)

So the big question here is: who is "we"? What has really changed for the system administrators, the security administrators, the firewall administrators, the folks responsible for monitoring the logs, the pentesters, the application security girls, the policy and compliance weenies? Must they all suddenly switch to a threat focus?

If by "we" you are talking about the intelligence community, the military, national security policy? Absolutely. Do what you need to do--or what I assumed you were already doing. Target terrorist networks with "cyber weapons," take out critical infrastructure with your cache of 0-day SCADA (or Telecom) vulnerabilities. Just do it, Cybercommand. Or whoever.

But for the rest of us, who probably aren't doing as good a job as we should monitoring our networks, patching our systems, analyzing our logs, keeping the auditors off our backs, and keeping our aging systems even running as we have to do more and more with less--we are supposed to care about who the bad guys are and go after them?

For us, I say Stuxnet and Aurora (the Google one, not the smoking, shaking generator one) change nothing.

Sunday, September 12, 2010

Altitude Induced Peace and Grown Up [Security] Jobs

So while en route from PHX to BWI, late Thursday night after a couple of (what appear to be) successful days onsite with a new-ish client, I couldn't help but feel a bit of satisfaction, or perhaps, more importantly, a lack of the restlessness that has characterized much of my infosec^H^H^H^H^Hcybersecurity career. Things actually made sense. My career had some sort of meaningful trajectory. I had not just been hopping around every 18 months for the last decade. There was a method to the madness.

(Perhaps it is no coincidence that we've finally settled and bought a house north of Baltimore, that I wasn't on pins and needles while out of town when talking to my wife, or that the start of the school year has gone surprisingly smoothly for my two oldest children, or that my oldest's BPD, et al. is reasonably under control, but I digress)

Around the turn of the century (and perhaps longer), I definitely had a case of the what do I do next? What is the next big thing? How can I scratch that itch?

I got my start as a trainer, which was a monkey I had on my back for a few years, only made worse by the fact that I had a B.A. in English and History from an engineering school and that my first job out of college was as an 8th grade reading teacher of all things!

So I relished the chance to leave Trident (only after my stock options from the Veridian acquisition were safely deposited) for an all-too-brief stint as a security consultant at SBC (where I mostly wrote proposals and coded tools nobody would ever use), then on to my 5+ years at Cisco (internal consulting, then R&D, then back again), which was just as frustrating as it was rewarding. Politics. Personalities. My own naiveté. But exposure to Big Corporate life and a hell of a lot of cool technology. Then, fleeing to a small company, which was also as rewarding as it was frustrating. Mostly the working-at-home thing and not having enough people to work with, which was maddening, because I found out I was more social than I thought. That culminated in the fateful move to Chicago for operational IT work and a hell of a lot of Ruby coding. On call. Weekend upgrades. BSD! Then back to training, because I was burned out on ops work and wanted an easy way to move back East. And I actually enjoyed teaching. Adults (in the military) and kids (when I was a middle school teacher).

What triggered my thoughts at 28,000 feet (or whatever 737-700s fly at Eastbound) was recalling how natural it felt to be up at the whiteboard early that morning in the meeting I was having: drawing network diagrams, proposing solutions, debating implementations, cracking jokes. How this was just like teaching--or at least how I liked to teach. And I realized that there was no way I could have felt this comfortable if I hadn't had the last two (no, scratch that, three) jobs.

One of the best things about teaching at Tenable (more so with the Enterprise classes than my Nessus classes) was how I'd get all sorts of weird questions based on peculiar aspects of a student's network (or political) environment. This kept me on my toes. And before then, at Hewitt, getting just enough of a taste of the keeping-shit-running hell of operational work and large, complex, global networks that makes a SCADA system seem simple. And before that, all the lessons learned from Dale (who was channeling Tom Peters) about projects and clients and who you really are. So despite the fact that I still haven't stayed in one place for more than 18 months, there has been a method to the madness.

But the "grown-up" part in the title? What is that about?

Perhaps it is because I was (or still am) only a mediocre bug-hunter (or pentester or whatever, though I loathe the term), but the vulnerability sort of work I did at Cisco (and later at Digital Bond) never seemed to make a difference. And if I'm honest about it, learning about a new product, application, protocol, or whatever was more interesting than actually finding flaws in it. Besides, you never really knew whether what you found would be fixed or not--it certainly was not in the critical path, only at the tail end. Maybe things have changed now that appsec has gone mainstream? Maybe I got involved in that field too early. I know I got interested in fuzzers way too early. That's for damn sure.

Although I still do some of this sort of work, these days most of what I do involves building things, not breaking things. And it is in the critical path. Secure network & system design is in the critical path. Hell, even compliance work is in the critical path.

That cool, technical work where you could wear shorts and sandals into the office every day and never had face to face meetings with your clients and never had to record time or worry about how much time you are billing, burn rate, or profitability--that was not in the critical path.

As much as I miss Austin, that is what I'll be thinking of when I head down 95 and pull into the office in the BWI flight path tomorrow morning for the first time in a week.

Thursday, September 02, 2010

A year later am I still scared? And some lessons on proposals

A year ago, when I started my current job at SAIC, I referenced a Tom Peters quote about how your projects should scare you. Well, a year later, I'm no longer scared about whether the new job will work out, whether I will hate it, or whether there will be enough billable work to keep me employed--or at least keep me from spending too much time in IR&D-land. In fact I'm not scared at all, although I'm certainly being challenged. This is a good thing, obviously.

I'm at the point where I thought I would be based on my standard 18-month cycle. It is interesting how this cycle has repeated itself as I've moved from job to job. About now, things are firing on all (or maybe too many) cylinders and I have the confidence that I lacked for a while. That confidence partially comes from having gotten over that "prove yourself" hump. (One of the consequences of switching jobs is that you have to prove yourself at the beginning, to yourself and others, in a way you don't if you stay in the same role for years and years. Of course if you have a continuing stream of new projects where you have to prove yourself to new clients, you get some of that, but it is not the same level of stress.)

Right now, I have too many projects (this week I billed to 6-7 different charge codes, which is way too many, as there is a reason you should have only 2-3 projects at a time) and tasks, and the key challenge is to delegate and distribute the load so I don't burn out.

And to say no. I will never forget how during my interview at Hewitt the to-be-CISO asked me if I could say no. I don't remember my answer, but I learned why it was necessary and I certainly see that now. And the need to disengage. Tonight, I told my boss (or at least my boss for a few more days now if the rumors are true) that I was taking the night off and my wife and I watched The Edge of Heaven. Highly recommended if a bit slow moving at first. No, I wouldn't be working on that proposal tonight.

Which leads me to the original idea for this blog post. So, proposals? What have I learned about proposals? Right now, I'm going through one of those frustrating periods at work where I'm doing less technical work and more proposals and pricing than I'd like. These periods are a necessary evil, and the thrill of the award makes them worth it, but they are not fun. I thought I hated technical proposals, but pricing is worse. And there are things worse than pricing, but I won't go there.

So it seems like I've done a lot of proposals in the last year. Maybe I have, maybe I haven't, relative to folks in similar roles in similar-sized companies, but there has definitely been variety. A few technical proposals I've written from scratch (more or less), done all the pricing for, and am now leading the resulting project, but for most I was part of a larger proposal team. And this is where the idea for this blog post came to mind as I was stuck in traffic on 695 this afternoon.

What are some lessons from writing project proposals (or at least participating in proposal teams) over the last year?

Who is the boss? This seems easy but it doesn't always happen. Even if there are multiple PMs or team leads involved, you need to appoint one who will run the proposal process. Ambiguity as to who is driving the process can be a disaster. It will screw things up.

Err on the side of more conference calls and fewer emails. I can't believe I'm saying this. Especially if there are folks that haven't worked together before or are from different organizations, you have to build up that trust. Don't hesitate to pick up the phone. Just sending out calls to contribute to and review a document isn't going to cut it.

Divide and conquer. Assign folks to specific questions in the RFP. Yeah, the folks assigned may screw it up and somebody may need to clean up and rescue that section, but there needs to be clarity on who has to write what.

Tell folks when things are due. When does the client need it? What are the internal process hurdles? When are your tasks due? Open-ended tasks won't get done. Especially if folks are using spare cycles, they need to know when to perform that context switch.

Get some sleep. Yes, since proposals are typically on top of your normal project, they do require nights and weekends.

And on that note.

Monday, August 09, 2010

Finally Went Android (Or, My Favorite Apps)

So last month I finally broke down and got an Android phone. I'd had my eye on them for a while. I've never had a desire to get an iPhone, mostly due to my aversion to AT&T, but also because I am not an iPhone guy, just like I wasn't an Apple guy in high school (not surprisingly, I was a Kaypro and then a C-64 guy), although I've been through a couple of Mac phases over the years.

I went with the much maligned Motorola Cliq XT, mostly because it was cheap, small, and fairly rugged. That has been my preference in cell phones over the years. A lot of folks complained about Motoblur but it really isn't that bad. Realistically, you don't even have to use it. Delete the bubbles off your home screen and you really won't know it is there.

The Cliq XT is not perfect, but hey, it is just a phone! Plus I'm chained to my BlackBerry, so it doesn't really matter if it reboots while I'm on a call; someone can still get ahold of me. Yeah, I know it is still Android 1.5, but I don't care. In terms of usability for non-email tasks it is such an improvement over my BlackBerry. I also like having two different carriers. I had no issue with Sprint, but their upgrade prices were awful and I got free activation from T-Mobile.

(Pro tip: if you are unfortunate enough to work at a large company, at least check for corporate discounts with cell providers. Not only do you save money but you get better service when you call in.)

So what are my favorite apps?

Seesmic. That is a no brainer. No ads, and it works for high volume accounts. I tried the "Happenings" app for my low volume account. Forget about it.

Opera Mini - I really haven't given the built-in browser a chance. Haven't wanted to.

Easy Tether -- I really only use this as a backup but it works on Ubuntu 10.04 and Windows without issue. I did manage to do an adb shell, but never got the SOCKS proxy working.

Some Battery Widget on my desktop that allows me to toggle GPS/Wifi.

Connect Bot -- I still haven't generated public keys, but it is more useful than you would think. I still haven't figured out how to send a CTRL-C, so unless you know how, don't run "top".

Pandora -- everybody knows what Pandora is. I use this to help put my toddler to sleep. Try the "Lucinda Williams" channel; it works, although it makes you miss Texas.

Android System Information - this barely makes the cut but at least I haven't deleted it.

Advanced Task Killer Free - I really haven't touched this much, but I think it is doing something?

Sunday, March 21, 2010


So I've been listening to too much of the raucous debate in the House about health care on C-SPAN.

Politico provides the best analysis of the behind-the-scenes action in what led to where we are:
The rebirth of the reform effort is the result of a little luck, insurance company avarice, a subsiding of post-Brown panic among party incumbents and the calculation by many Hill Democrats that going small or giving up was just as politically perilous as going big.

But the main reason the bill has made it to the floor has as much to do with the complex, occasionally tense, ever-evolving partnership between the first African-American president and the first female speaker.

Publicly, the White House seemed to send a different signal each day.

In the space of two weeks, Obama or his top advisers suggested breaking the bill into smaller parts, keeping it together in one comprehensive package, putting it at the back of legislative line and needing to “punch it through” Congress, as Obama himself said at one point.

At a fundraiser in early February, Obama described the “next step” as sitting down with Republicans, Democrats and health care experts, describing a process that could take weeks, if not longer. He also seemed to acknowledge for the first time that Congress may well decide to scrap health care altogether — an admission that blunted his repeated and emphatic vows to finish the job.

Behind the scenes, Obama had, in fact, already settled on a strategy.

He would invite Republicans and Democrats to a summit, to give them one last chance at compromise, knowing they wouldn’t budge. And privately, he had decided that his favored approach was a comprehensive bill.

However, what I've been thinking about over the last few days is the similarity between how health care reform, Hillary, and McCain were handled by the Obama team. See my post from October 08 where I mentioned an Andrew Sullivan article on how Obama's calm (and sometimes perceived weakness) lures his opponents into a false sense of victory and incites them. It was easy to doubt him during the primary (why wasn't he more aggressive against Clinton?) and during the summer of 08 (why did he take that stupid trip to Europe?), but in all cases (assuming this goes through) he pulls it off.

Will this really make a difference? I don't know and I really don't care. I knew enough to support it: the exclusions for pre-existing conditions (which impacted my family first hand) were reason enough for me, although I know how much I paid for my plan at Cisco a decade ago, probably 1/5 of what it costs in a comparable-sized company today. As if the campaign of 2008 (the selection of Palin) didn't prove it enough, the GOP continues to show its true colors, and they don't match mine anymore. Even though I have elected more Republicans than Democrats over the years (not like you really have a choice in Texas), it is hard to know whether the GOP is really that ignorant or just that deceptive in order to appeal to the "downscale" Republican base that Bush built. As always, I'm a political reactionary. For me, just as my vote for Bush in 2004 was more a vote against Michael Moore and his kind, my support for this bill is just as much about giving Limbaugh and Beck the finger as passing needed reforms. And the former is a lot more certain than the latter.

Sunday, February 21, 2010

Installing CouchDB/CouchApp on 64 Bit Debian 5.x

So the manual installation instructions for CouchApp are mostly correct, but here are some slight modifications.

NOTE: my installation assumed everything that was custom-compiled went into /opt to keep a clean segregation between what is part of the distribution and what is hand compiled.

1. Install Erlang from source (V5.7.4). I removed the 5-6 Debian erlang packages but still pulled in the build dependencies. I also had to add libncurses5-dev to the list of packages for Erlang to compile.

./configure --with-ssl --prefix=/opt

2. Of course now I had to use the following when building CouchDB:

./configure --prefix=/opt --with-erlang=/opt/lib/erlang/usr/include/

3. Adjust paths accordingly to /opt/var/lib/couchdb /opt/var/log/couchdb etc.

NOTE: This VZ was running on 64bit OpenVZ

Linux debian5amd64-50 2.6.26-2-openvz-amd64 #1 SMP Thu Feb 11 01:40:09 UTC 2010 x86_64 GNU/Linux

Thursday, February 18, 2010

Did the Stimulus Work?

Spending 2-3 hours on the D.C. beltway (yeah, it took me that long to get from Rockville to Ellicott City tonight) does not put you in the best frame of mind, but I took a break from tech stuff to catch up on some politics for a change, and actually blog instead of tweet. Apart from the campaign-style movie above, there is Judging the Stimulus by Job Data Reveals Success, with the key argument as follows:

The case against the stimulus revolves around the idea that the economy would be no worse off without it. As a Wall Street Journal opinion piece put it last year, “The resilience of the private sector following the fall 2008 panic — not the fiscal stimulus program — deserves the lion’s share of the credit for the impressive growth improvement.” In a touch of unintended irony, two of article’s three authors were listed as working at a research institution named for Herbert Hoover.

Of course, no one can be certain about what would have happened in an alternate universe without a $787 billion stimulus. But there are two main reasons to think the hard-core skeptics are misguided — above and beyond those complicated, independent economic analyses.

The first is the basic narrative that the data offer. Pick just about any area of the economy and you come across the stimulus bill’s footprints.

In the early months of last year, spending by state and local governments was falling rapidly, as was tax revenue. In the spring, tax revenue continued to drop, yet spending jumped — during the very time when state and local officials were finding out roughly how much stimulus money they would be receiving. This is the money that has kept teachers, police officers, health care workers and firefighters employed.

Then there is corporate spending. It surged in the final months of last year. Mark Zandi (who has advised the McCain campaign and Congressional Democrats) says that the Dec. 31 expiration of a tax credit for corporate investment, which was part of the stimulus, is a big reason.

Let's hope so.

Thursday, February 04, 2010

A Maze of Twisty Fuzzers All Alike

Funny how a single innocent tweet can stir the pot. Not that I'm disappointed or that I mind, because the pot definitely needed to be stirred, but that certainly wasn't my intent on Monday. Really. But let's back up.

So I gave a short, not terribly technical presentation on Open Source fuzzing tools right before lunch at a conference on vulnerability discovery hosted by CERT at their office in Arlington. It was good to be back there, as I'd been to the SEI offices in 2006 when I was working with them on the disclosure of some SCADA vulns.

Unfortunately, I didn't get to stick around the whole day (I missed Jared DeMott's presentation) and I was in and out during a conference call, but there were interesting talks by CERT, CERT-FI, Secunia, and Codenomicon.

But most interesting, and what led to my innocent tweet was a talk by Microsoft on how they use fuzzing and what were the results of different tools and approaches.

The conclusion I found surprising was that they found the use of "smart fuzzers" to have a lower ROI than the use of "dumb fuzzers" and their whitebox fuzzing platform, SAGE. Their point was that the time it takes to define, model, and implement the protocol in a smart fuzzer is in most cases better spent having less skilled engineers run dumb fuzzers or whitebox tools.

They mentioned a talk at Blue Hat Security Briefings (I don't think this is the actual talk, but I don't have time to look for it) where they presented the bug results from a previously untested application that was tested by an internally written smart fuzzer, Peach (the dumb fuzzer?), and SAGE. They mentioned an interesting technique of taking "major hashes" and "minor hashes" of the stack traces to isolate unique bugs. This is interesting because the primary focus has been on reducing the number of unique test cases, but another approach is to look at the results, which may end up being more efficient. Of course this assumes the ability to instrument targets, which may not always be possible, for example with embedded systems.
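The major/minor hash idea can be sketched in a few lines of Python. This is my own illustration of the technique, not Microsoft's implementation; the frame names and depths are made up:

```python
import hashlib

def trace_hashes(frames, major_depth=2, minor_depth=5):
    """Bucket a crash by hashing the top frames of its stack trace.

    frames: list of symbolized frame names, innermost frame first.
    The "major" hash (fewer frames) coarsely groups crashes sharing a
    crash site; the "minor" hash (more frames) splits a major bucket
    into likely-distinct bugs.
    """
    def bucket(depth):
        joined = "|".join(frames[:depth])
        return hashlib.sha1(joined.encode("utf-8")).hexdigest()[:8]
    return bucket(major_depth), bucket(minor_depth)

# Two crashes at the same site reached via different call paths share
# a major hash but get different minor hashes:
a = trace_hashes(["memcpy", "parse_tlv", "read_packet", "main"])
b = trace_hashes(["memcpy", "parse_tlv", "handle_retry", "main"])
```

The point of the two depths is exactly the triage tradeoff described above: dedupe aggressively first, then split the survivors.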

So Dale picked up on this and tried to apply this to the world of SCADA
We have two security vendors that are trying to sell products to the control system market: Wurldtech with their Achilles platform and Mu Dynamics with their Mu Test Suite. [FD: Wurldtech is a past Digital Bond client and advertiser] One of the features of these products is they both send a large number of malformed packets at an interface – - typically crashing protocol stacks that have ignored negative testing.
Mu responded within the comments in the blog and Wurldtech (far more defensively) on their own blog
In fact, our CTO Dr. Kube even gave a presentation at Cansecwest almost 2 years ago called “Fuzzing WTF” which was our first attempt to re-educate the community. To bolster the impact, we invited our friends at Codenomicon to help as they also were frustrated with the community discourse. The presentation can be found here.
Well I guess this "re-education" (which sounds vaguely Maoist; I guess some of us need to be sent to a Wurldtech Fuzzing Re-education program) hasn't exactly worked, although a satisfied Wurldtech customer did chime in on the Digital Bond blog. I actually agree that better descriptions of fuzzing tool capabilities are needed, and that was the entire point of my talk. I did a survey of the features available in several dozen fuzzing tools and fuzzing frameworks that could be used for robustness testing.

I didn't spend as much time on actual message generation as I should have, and I was only focusing on Free and Open Source tools, but I identified a number of attributes for comparison such as target, execution mode, language, transport, template (generation, data model, built-in functions), fault payloads, debugging & instrumentation, and session handling. I'm not sure I completely hit my target, but one of my goals was to develop criteria to help folks make better choices about which Open Source tools could most efficiently conduct robustness testing of their target. One of my conclusions (which I was pleased to hear echoed in the Microsoft talk) is that no single tool is best, no single approach is adequate, and there are different types of fuzzing users that require different feature sets. A QA engineer (who may have little to no security expertise) requires different features than a pen-tester (or a security analyst on a compliance-based engagement), which are still different from those a hard core security researcher needs.

And the same applies to commercial tools you are paying tens of thousands of dollars for. One size does not fit all, regardless of the marketing (or mathematical) claims of the vendor. It would definitely be good to see a bakeoff of the leading commercial and Open Source fuzzing/protocol robustness tools similar to what Jeff Mercer has been doing for webapp scanners, but I'm not optimistic we will see that for the commercial tools, because they are too expensive and the primary customers for these tools (large vendors) are not going to disclose enough details about the vulnerabilities discovered to provide a rich enough data set for comparison.

It won't be me, but perhaps some aspiring young hacker will take the time to do a thorough comparison of the coverage of the tools that are out there against a reference implementation -- instead of writing yet another incomplete, poorly documented Open Source fuzzer or fuzzing framework.

Wednesday, January 13, 2010

Hello MongoDB (Jython Style)

It has been ages since I've played around with any of the Java scripting languages, so I thought I'd give Jython a spin with MongoDB. I have no idea about the performance difference between the pure Python and Java drivers, but it would be an interesting benchmark.

This is a very quick code snippet based on the MongoDB Java tutorial.

This was done on Ubuntu 9.10 with OpenJDK in the standard repositories and assumes the jython shell script is in your path. It also assumes the Java MongoDB driver is in your path and I was lazy so I didn't bother with CLASSPATH.

#!/usr/bin/env jython
import sys
from com.mongodb import *

print "Jython MongoDB Example"
m = Mongo("")
db = m.getDB("grid_example")

for c in db.getCollectionNames():
    print c

And the output is just what you'd expect.

mfranz@karmic-t61:~/Documents/mongo$ ./
Jython MongoDB Example

Avoiding Bracket Hell in MongoDB Queries (Python Style)

To me it wasn't immediately obvious from the MongoDB Advanced Query documentation that you can string together multiple operators to perform existence, membership, and greater/less-than tests. Since the JSON can get very messy (and long!) and the syntax is slightly different from the JavaScript in the documentation, instead of passing JSON directly to the find method of your collection, pass a dictionary and assign the various conditions to it.

For example:

myq = {}
myq["batchstamp"] = b # a timestamp
myq["modbus_tcp_reference_num"] = {"$exists": True}
cur = coll.find( myq )

Although it doesn't appear much easier than passing

{'modbus_tcp_reference_num': {'$exists': True}, 'batchstamp': 999999999}

Once you start adding additional conditions (which may themselves contain dictionaries) it is much easier and less error prone. Trust me!
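To make that concrete, here is a sketch of composing a larger query incrementally. The batchstamp and modbus_tcp_reference_num fields are from the example above; the method and size conditions are invented for illustration, and the find() call is commented out since it needs a live collection:

```python
# Compose a MongoDB query dict field by field instead of writing
# one giant nested literal.
myq = {}
myq["batchstamp"] = 999999999                        # exact match
myq["modbus_tcp_reference_num"] = {"$exists": True}  # existence test
myq["method"] = {"$in": ["GET", "CONNECT"]}          # membership test
myq["size"] = {"$gt": 1000, "$lt": 1000000}          # range test

# cur = coll.find(myq)  # would run against a live collection
```

Each condition is a separate assignment, so adding or removing one never forces you to re-balance brackets.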

Sunday, January 10, 2010

PyMongo for Dummies (using Squid logs, again)

In my last blog I showed some examples from the MongoDB shell. Next, we'll go through the PyMongo API, since only crazy people code in JavaScript.

In [3]: c = pymongo.Connection("")
In [4]: db = c.mongosquid
In [5]: raw = db.raw
In [6]: raw
Out[6]: Collection(Database(Connection('', 27017), u'mongosquid'), u'raw')

We could have also referred to our collection as db["raw"], or db[coll] if you needed to define the collection name in a variable.

In [7]: raw.count()
Out[7]: 205339

You can list the collections in the database with the collection_names() method.

In [40]: db.collection_names()
Out[40]: [u'raw', u'system.indexes']

The find_one() method allows you to quickly inspect your collection and take a peek at a sample document.

In [10]: raw.find_one()


{u'_id': ObjectId('4b496cddb15cb004a4000000'), u'format': u'-', u'method': u'GET', u'size': 824477.0, u'source': u'', u'squidcode': u'TCP_MISS/200', u'stamp': 1263096815.7609999, u'url': u''}

The distinct() method does have some limitations, as I discovered the hard way, as you can see from this exception.

In [13]: raw.distinct("stamp")
---------------------------------------------------------------------------
OperationFailure                          Traceback (most recent call last)
/root/
/usr/lib/python2.4/site-packages/pymongo-1.3-py2.4-linux-i686.egg/pymongo/collection.pyc in distinct(self, key)
/usr/lib/python2.4/site-packages/pymongo-1.3-py2.4-linux-i686.egg/pymongo/cursor.pyc in distinct(self, key)
/usr/lib/python2.4/site-packages/pymongo-1.3-py2.4-linux-i686.egg/pymongo/database.pyc in _command(self, command, allowable_errors, check, sock)

OperationFailure: command SON([('distinct', u'raw'), ('key', 'stamp')]) failed: assertion: distinct too big, 4mb cap

In my previous blog (using JavaScript) I introduced queries, but you really can't do anything useful without using a cursor. If you've ever done any MySQL coding you should be familiar with the concept. Basically it allows you to iterate through the results of a query.

Here we have the same expressions, but you obviously need to quote "$gt" in Python.

In [29]: c = raw.find( {'stamp': { "$gt": 1263096815 }})
In [31]: c.count()
Out[31]: 2060


In [23]: c = raw.find({'squidcode':'TCP_DENIED/403'})
In [24]: c.count()

Out[24]: 2999

For the sake of this exercise, we only want to see 3 results so we call the limit() method.

In [26]: c.limit(3)


Now we can iterate through the results of our query.

In [27]: for e in c:
   ....:     print e

{u'squidcode': u'TCP_DENIED/403', u'format': u'-', u'stamp': 1262520969.721, u'source': u'', u'url': u'', u'_id': ObjectId('4b496ea4b15cb004a6000000'), u'method': u'GET', u'size': 1419.0}

{u'squidcode': u'TCP_DENIED/403', u'format': u'-', u'stamp': 1262521126.928, u'source': u'', u'url': u'', u'_id': ObjectId('4b496ea4b15cb004a600003e'), u'method': u'GET', u'size': 1395.0}

{u'squidcode': u'TCP_DENIED/403', u'format': u'-', u'stamp': 1262521127.654, u'source': u'', u'url': u'', u'_id': ObjectId('4b496ea4b15cb004a600003f'), u'method': u'GET', u'size': 1419.0}

So if we try again, what happens?

In [28]: for e in c:
   ....:     print e
   ....:

Nada. We have to rewind the cursor object to be able to iterate again.

In [30]: c.rewind()
In [31]: for e in c:
   ....:     print e
   ....:

{u'squidcode': u'TCP_DENIED/403', u'format': u'-', u'stamp': 1262520969.721, u'source': u'', u'url': u'', u'_id': ObjectId('4b496ea4b15cb004a6000000'), u'method': u'GET', u'size': 1419.0}

You can also manually iterate through the results by calling next().

In [51]:


{u'_id': ObjectId('4b496ea4b15cb004a6000000'),
u'format': u'-', u'method': u'GET', u'size': 1419.0, u'source': u'', u'squidcode': u'TCP_DENIED/403', u'stamp': 1262520969.721, u'url': u''}

In [52]: result =

Guess what, your limit will still apply, so if you want to clear it you can do a c.rewind() and c.limit(0), and then you can manually iterate through with next().

Dummies Guide to MongoDB Queries using Squid Logs (JavaScript Shell Edition)

So the MongoDB developer documentation is actually pretty decent, but it doesn't really use examples with real data. For me, that made it more difficult for some of the API and shell commands to sink in.

So to generate some real world queries, I created a Python script that parses the access.log file[s] generated by squid. I'll follow this blog with one that covers pymongo, but I think this will be helpful, and like most of my posts it will provide a good reference, because when you are rapidly approaching 40, not only do your eyes go, but your memory does too. So here goes...

First of all, this assumes you are running the mongo JavaScript shell, and yeah, I know running as root is a bad idea and not even necessary (I don't think), but sue me.

root@opti620:~/mongodb# ./bin/mongo
MongoDB shell version: 1.2.1
url: test
connecting to: test
type "help" for help
> show dbs
> use mongosquid
switched to db mongosquid
> show collections

Now let's have some fun. This was actually when I had just imported a few lines from the log file, so there are a relatively small number of documents. A collection is essentially like a table, but since this is #nosql it really isn't a table. It is just a collection of documents. We'll see those next.

> db.raw.find().count()
> db.raw.find()[1029]
> db.raw.find()[1028]
{
        "_id" : ObjectId("4b496cddb15cb004a4000404"),
        "squidcode" : "TCP_MISS/200",
        "source" : "",
        "stamp" : 1263102993.841,
        "format" : "-",
        "url" : "",
        "method" : "CONNECT",
        "size" : 17499
}

The JSON above is the "document." Something you'll notice is that there are basically two data types here: strings and floating points. The size field and timestamp are obviously floats. That hash-looking thing is the ObjectId, a GUID-like identifier that is supposedly unique.

So one of the cool built-in queries returns only the unique values for a given field. This is handled by the distinct method.

So we can see here that there were only CONNECTs and GETs.

> db.raw.distinct("method")
[ "CONNECT", "GET" ]

And because of my screwed up NATing I can't tell which of my kids was going to netflix.

> db.raw.distinct("source")
[ "" ]

> db.raw.distinct("url")

So remember when I discussed types above: if we wanted to retrieve all the transactions that were greater than 1MB we could do the following, though there is obviously more to it than that.

> db.raw.find( {size: { $gt:1000000}} )
{ "_id" : ObjectId("4b496cddb15cb004a4000162"), "squidcode" : "TCP_MISS/200", "source" : "", "stamp" : 1263097489.996, "format" : "-", "url" : "", "method" : "GET", "size" : 1008478 }
{ "_id" : ObjectId("4b496cddb15cb004a40003b0"), "squidcode" : "TCP_MISS/200", "source" : "", "stamp" : 1263099100.207, "format" : "-", "url" : "", "method" : "GET", "size" : 1008478 }

I was pleased to find that you can use regular expressions. The first query tells me there are 3199 documents that have port 443 in them, and the second query returns the first document. One thing I noticed is that retrieving a document by "index" is really, really slow. But I believe that is because it isn't really an index; we'll get to indexes later.

> db.raw.find ( { url: /:443/ }).count()
> db.raw.find ( { url: /:443/ })[0]
"_id" : ObjectId("4b496cddb15cb004a4000093"),
"squidcode" : "TCP_MISS/200",
"source" : "",
"stamp" : 1263096929.091,
"format" : "-",
"url" : "",
"method" : "CONNECT",
"size" : 96222
> db.raw.find ( { url: /:443/ })[0:3]
Sun Jan 10 01:16:11 JS Error: SyntaxError: missing ] in index expression (shell):0

You'll notice that array slices don't work in the shell, but they do in Python, which I'll blog on next.

Saturday, January 09, 2010

FreeBSD 8.0 with rum0 and wpa_supplicant on Lenovo S10-2

It looks like the rum driver has changed slightly from FreeBSD 7.2 to 8.0, because I was not able to use the same command-line syntax as I did previously. Basically the only thing I did differently was the ifconfig wlan create...

I had this card running on an old Dell Optiplex acting as a bridge for my kids' network (and they were watching a lot of streaming media) and I was surprisingly impressed with it. Decent performance.

ugen4.3: at usbus4
rum0: on usbus4
rum0: MAC/BBP RT2573 (rev 0x2573a), RF RT2528

mfranz-bsd8# cat /etc/wpa_supplicant.conf

mfranz-bsd8# ifconfig wlan create wlandev rum0
mfranz-bsd8# ifconfig wlan0
wlan0: flags=8802 metric 0 mtu 1500
ether 00:1c:10:e6:1a:02
media: IEEE 802.11 Wireless Ethernet autoselect (autoselect)
status: no carrier
ssid "" channel 1 (2412 Mhz 11b)
country US authmode OPEN privacy OFF txpower 0 bmiss 7 scanvalid 60
bgscan bgscanintvl 300 bgscanidle 250 roam:rssi 7 roam:rate 1
bintval 0

mfranz-bsd8# wpa_supplicant -c /etc/wpa_supplicant.conf -i wlan0
Trying to associate with xxxxxxxxxx (SSID='xxxxxxxx' freq=2437 MHz)
Associated with xxxxxxxxxxx
WPA: Key negotiation completed with xxxxxxxxxxx [PTK=CCMP GTK=TKIP]
CTRL-EVENT-CONNECTED - Connection to xxxxxxxxxx completed (auth) [id=0 id_str=]

And while I'm at it, I hadn't seen anyone who had actually installed 8.0 on a Lenovo netbook, but so far so good. I've got X working (I'll blog on that later) and the re Ethernet driver seems to work well enough. Obviously the Broadcom 4312 isn't going to work, but if you have a USB wifi card or a tether you will be ok.

Next step see if I can get my Novatel u727 card working. I suspect it should work just fine, because it worked well on OpenBSD, but you never know...

Copyright (c) 1992-2009 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 8.0-RELEASE #0: Sat Nov 21 15:48:17 UTC 2009
Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: Intel(R) Atom(TM) CPU N270 @ 1.60GHz (1602.40-MHz 686-class CPU)
Origin = "GenuineIntel" Id = 0x106c2 Stepping = 2
AMD Features2=0x1
TSC: P-state invariant
real memory = 1073741824 (1024 MB)
avail memory = 1026433024 (978 MB)
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
FreeBSD/SMP: 1 package(s) x 1 core(s) x 2 HTT threads
cpu0 (BSP): APIC ID: 0
cpu1 (AP/HT): APIC ID: 1
ioapic0: Changing APIC ID to 4
ioapic0 irqs 0-23 on motherboard
kbd1 at kbdmux0
acpi0: on motherboard
acpi0: [ITHREAD]
acpi0: Power Button (fixed)
Timecounter "ACPI-fast" frequency 3579545 Hz quality 1000
acpi_timer0: <24-bit> port 0x408-0x40b on acpi0
acpi_ec0: port 0x62,0x66 on acpi0
acpi_hpet0: iomem 0xfed00000-0xfed003ff on acpi0
Timecounter "HPET" frequency 14318180 Hz quality 900
acpi_button0: on acpi0
acpi_lid0: on acpi0
acpi_button1: on acpi0
pcib0: port 0xcf8-0xcff on acpi0
pci0: on pcib0
vgapci0: port 0x60f0-0x60f7 mem 0x58280000-0x582fffff,0x40000000-0x4fffffff,0x58300000-0x5833ffff irq 16
at device 2.0 on pci0
agp0: on vgapci0
agp0: detected 7932k stolen memory
agp0: aperture size is 256M
vgapci1: mem 0x58200000-0x5827ffff at device 2.1 on pci0
pci0: at device 27.0 (no driver attached)
pcib1: at device 28.0 on pci0
pci1: on pcib1
pcib2: at device 28.1 on pci0
pci2: on pcib2
pci2: at device 0.0 (no driver attached)
pcib3: at device 28.2 on pci0
pci3: on pcib3
re0: port 0x2000-0x20ff mem 0x52010000-0x52010fff,0x52000000-0x5200ffff irq 18 at
device 0.0 on pci3
re0: Using 1 MSI messages
re0: Chip rev. 0x24800000
re0: MAC rev. 0x00400000
miibus0: on re0
rlphy0: PHY 1 on miibus0
rlphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
re0: Ethernet address: 00:26:22:0b:07:28
re0: [FILTER]
pcib4: at device 28.3 on pci0
pci4: on pcib4
uhci0: port 0x60a0-0x60bf irq 16 at device 29.0 on pci0
uhci0: [ITHREAD]
uhci0: LegSup = 0x0f00
usbus0: on uhci0
uhci1: port 0x6080-0x609f irq 17 at device 29.1 on pci0
uhci1: [ITHREAD]
uhci1: LegSup = 0x0f00
usbus1: on uhci1
uhci2: port 0x6060-0x607f irq 18 at device 29.2 on pci0
uhci2: [ITHREAD]
uhci2: LegSup = 0x0f00
usbus2: on uhci2
uhci3: port 0x6040-0x605f irq 19 at device 29.3 on pci0
uhci3: [ITHREAD]
uhci3: LegSup = 0x0f00
usbus3: on uhci3
ehci0: mem 0x58344400-0x583447ff irq 16 at device 29.7 on pci0
ehci0: [ITHREAD]
usbus4: EHCI version 1.0
usbus4: on ehci0
pcib5: at device 30.0 on pci0
pci5: on pcib5
isab0: at device 31.0 on pci0
isa0: on isab0
atapci0: port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0x60c0-0x60cf irq 16 at device 31.1 on pci0
ata0: on atapci0
ata0: [ITHREAD]
atapci1: port 0x60d8-0x60df,0x60fc-0x60ff,0x60d0-0x60d7,0x60f8-0x60fb,0x6020-0x602f mem 0x583440
00-0x583443ff irq 17 at device 31.2 on pci0
atapci1: [ITHREAD]
atapci1: AHCI called from vendor specific driver
atapci1: AHCI v1.10 controller with 4 1.5Gbps ports, PM not supported
ata2: on atapci1
ata2: [ITHREAD]
ata3: on atapci1
ata3: [ITHREAD]
psm0: irq 12 on atkbdc0
psm0: [ITHREAD]
psm0: model Generic PS/2 mouse, device ID 0
cpu0: on acpi0
est0: on cpu0
p4tcc0: on cpu0
cpu1: on acpi0
est1: on cpu1
p4tcc1: on cpu1
pmtimer0 on isa0
orm0: at iomem 0xcf000-0xcffff pnpid ORM0000 on isa0
sc0: at flags 0x100 on isa0
sc0: VGA <16 flags="0x300">
vga0: at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0
ppc0: parallel port not found.
Timecounters tick every 1.000 msec
usbus0: 12Mbps Full Speed USB v1.0
usbus1: 12Mbps Full Speed USB v1.0
usbus2: 12Mbps Full Speed USB v1.0
usbus3: 12Mbps Full Speed USB v1.0
usbus4: 480Mbps High Speed USB v2.0
ad4: 152627MB at ata2-master SATA150
ugen0.1: at usbus0
uhub0: on usbus0
ugen1.1: at usbus1
uhub1: on usbus1
ugen2.1: at usbus2
uhub2: on usbus2
ugen3.1: at usbus3
uhub3: on usbus3
ugen4.1: at usbus4
uhub4: on usbus4
GEOM: ad4: partition 1 does not start on a track boundary.
GEOM: ad4: partition 1 does not end on a track boundary.
uhub0: 2 ports with 2 removable, self powered
uhub1: 2 ports with 2 removable, self powered
uhub2: 2 ports with 2 removable, self powered
uhub3: 2 ports with 2 removable, self powered
Root mount waiting for: usbus4
Root mount waiting for: usbus4
Root mount waiting for: usbus4
uhub4: 8 ports with 8 removable, self powered
Root mount waiting for: usbus4
Root mount waiting for: usbus4
ugen4.2: at usbus4
Trying to mount root from ufs:/dev/ad4s2a
ugen0.2: at usbus0
ums0: on usbus0
ums0: 2 buttons and [XY] coordinates ID=0
drm0: on vgapci0
vgapci0: child drm0 requested pci_enable_busmaster
info: [drm] AGP at 0x40000000 256MB
info: [drm] Initialized i915 1.6.0 20080730

Thursday, January 07, 2010

Some Shallow & Superficial Reasons for Picking MongoDB for your [web]app

So I first got turned on to #nosql databases a little over (or under) a year ago with CouchDB, but lately I've been quite enamored with MongoDB.

So forget about deep architectural reasons for using it. Here are some quite practical reasons for when you are not a full-time developer (or database guru) but find yourself doing development that involves a data store, and the thought of using MySQL (so 2000s) in your app makes you cringe:
  • Abhorrence for schemas, ORMs, and migrations - this is basically the laziness argument. Basically I want/need to store stuff, and the stuff I want to store might change, and I don't want to have to deal with changing the schema (and my app) to adapt to those changes. This is where document oriented databases like CouchDB and MongoDB rule. If everything is a JSON object, it is a great place for you to store stuff.
  • Ease of Installation & Compilation -- yep, CouchDB has been in the latest Ubuntu repos for a while, but I use Lenny/Hardy server side, so forget about it. Dealing with Erlang (and finding all the dependencies to build SpiderMonkey) was a big pain in the ass. Beam, what the hell is beam? Mongo has 32/64 bit Linux binaries that just work, and I briefly managed to get it to compile on FreeBSD 7.2. And unlike some of the others out there it doesn't require a JRE.
  • Map/Reduce hurts my head - the simplicity of queries is one of the key differentiators between Mongo and CouchDB. I'm not an expert yet, but having to write Map/Reduce functions to create views just to get at your data was a slippery concept for me.
  • Non-HTTP Transport -- unlike CouchDB, Mongo has a binary client/server protocol and doesn't use HTTP.
There are also some really cool features like capped collections that should be useful for the app I'm working on, but these were some of the reasons why I went with Mongo. Back to coding...
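For what it's worth, a capped collection is just a creation-time option. A sketch of what that looks like from pymongo (collection name and sizes are made up; the actual call is commented out since it needs a running mongod, and whether you pass kwargs or an options dict depends on your pymongo version):

```python
# A capped collection is fixed-size, preserves insertion order, and
# ages out the oldest documents automatically -- handy for logs.
opts = {"capped": True,
        "size": 50 * 1024 * 1024,  # max bytes
        "max": 100000}             # max number of documents

# Against a live mongod, something like:
# db.create_collection("squidlog", **opts)
```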

Tuesday, January 05, 2010

Pansy or Victim?

So unfortunately someone who I [used to] follow over on @frednecksec cited an article which allowed me to check out the cool sponsors such as the one pictured above, but don't forget Silverlungs.

To each his own; I prefer inhaling gaseous gold myself. Much better preparation for the "End Times," the "New World" or whatever the "elites" have in store for us.