Showing posts with label Two-O. Show all posts

Wednesday, December 09, 2009

WebSocket Service Fingerprinting with Curl

"Fingerprinting" is probably a bit of a stretch (at least I didn't use the "h" word), but playing with pywebsocket is probably the easiest way to learn about the protocol.

Start up the server...

franz@mfranz-s10-2:~/Documents/pywebsocket-read-only/src/mod_pywebsocket$ python standalone.py -p 8888 -w ../example/

Then the client...

mfranz@mfranz-s10-2:~/Documents/pywebsocket-read-only/src/example$ python echo_client.py -s 127.0.0.1 -p 8888
Send: Hello
Recv: Hello
Send: 日本
Recv: 日本
Send: Goodbye
Recv: Goodbye

Look at the traffic on the wire with ngrep.
interface: lo (127.0.0.0/255.0.0.0)
####
T 127.0.0.1:44284 -> 127.0.0.1:8888 [AP]
GET /echo HTTP/1.1..
##
T 127.0.0.1:44284 -> 127.0.0.1:8888 [AP]
Upgrade: WebSocket..
##
T 127.0.0.1:44284 -> 127.0.0.1:8888 [AP]
Connection: Upgrade..
##
T 127.0.0.1:44284 -> 127.0.0.1:8888 [AP]
Host: 127.0.0.1:8888..
##
T 127.0.0.1:44284 -> 127.0.0.1:8888 [AP]
Origin: http://localhost/..
##
T 127.0.0.1:44284 -> 127.0.0.1:8888 [AP]
..
##
T 127.0.0.1:8888 -> 127.0.0.1:44284 [AP]
HTTP/1.1 101 Web Socket Protocol Handshake..
##
T 127.0.0.1:8888 -> 127.0.0.1:44284 [AP]
Upgrade: WebSocket..
##
T 127.0.0.1:8888 -> 127.0.0.1:44284 [AP]
Connection: Upgrade..
##
T 127.0.0.1:8888 -> 127.0.0.1:44284 [AP]
WebSocket-Origin:
##
T 127.0.0.1:8888 -> 127.0.0.1:44284 [AP]
http://localhost/
##
T 127.0.0.1:8888 -> 127.0.0.1:44284 [AP]
..
##
T 127.0.0.1:8888 -> 127.0.0.1:44284 [AP]
WebSocket-Location:
##
T 127.0.0.1:8888 -> 127.0.0.1:44284 [AP]
ws://127.0.0.1:8888/echo
##
T 127.0.0.1:8888 -> 127.0.0.1:44284 [AP]
..
##
T 127.0.0.1:8888 -> 127.0.0.1:44284 [AP]
..
##
T 127.0.0.1:44284 -> 127.0.0.1:8888 [AP]
.Hello.
#
T 127.0.0.1:8888 -> 127.0.0.1:44284 [AP]
.Hello.
#
T 127.0.0.1:44284 -> 127.0.0.1:8888 [AP]
........
#
T 127.0.0.1:8888 -> 127.0.0.1:44284 [AP]
........
#
T 127.0.0.1:44284 -> 127.0.0.1:8888 [AP]
.Goodbye.
#
T 127.0.0.1:8888 -> 127.0.0.1:44284 [AP]
.Goodbye.
###

Now with curl. Notice the headers you have to add to get a response; with anything less I got a 404. The Origin header can be anything.

mfranz@mfranz-s10-2:~$ curl -v http://127.0.0.1:8888/echo -H "Upgrade: WebSocket" -H "Connection: Upgrade" -H "Origin: http://localhost"


* About to connect() to 127.0.0.1 port 8888 (#0)
* Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8888 (#0)
> GET /echo HTTP/1.1
> User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15
> Host: 127.0.0.1:8888
> Accept: */*
> Upgrade: WebSocket
> Connection: Upgrade
> Origin: http://localhost
>
But if the URI doesn't match, you get:

mfranz@mfranz-s10-2:~$ curl -v http://127.0.0.1:8888/ -H "Upgrade: WebSocket" -H "Connection: Upgrade" -H "Origin: http://localhost"


* About to connect() to 127.0.0.1 port 8888 (#0)
* Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8888 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15
> Host: 127.0.0.1:8888
> Accept: */*
> Upgrade: WebSocket
> Connection: Upgrade
> Origin: http://localhost
>
* Empty reply from server
* Connection #0 to host 127.0.0.1 left intact
curl: (52) Empty reply from server
* Closing connection #0
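
The whole exchange is simple enough to reproduce by hand. Here's a minimal Python sketch of the same draft-era handshake and framing visible in the ngrep capture above; the host, port, and /echo path come from the pywebsocket example, but the helper names are mine:

```python
# Hand-rolled sketch of the pre-standard (draft-75-style) WebSocket
# handshake and framing shown in the capture above. Helper names are
# mine; only the wire format comes from the transcript.

def handshake_request(host, port, path, origin):
    # The three headers beyond a plain GET that the server insists on:
    # Upgrade, Connection, and Origin (which can be anything).
    return (
        "GET %s HTTP/1.1\r\n"
        "Upgrade: WebSocket\r\n"
        "Connection: Upgrade\r\n"
        "Host: %s:%d\r\n"
        "Origin: %s\r\n"
        "\r\n" % (path, host, port, origin)
    ).encode("utf-8")

def frame(text):
    # Early drafts frame UTF-8 text between a 0x00 and a 0xFF byte --
    # that's where the dots around "Hello" in the ngrep output come from.
    return b"\x00" + text.encode("utf-8") + b"\xff"
```

Sending `handshake_request("127.0.0.1", 8888, "/echo", "http://localhost")` over a raw socket and then `frame("Hello")` should get the same echo the client above did.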


Sunday, February 22, 2009

Personal or Professional (or, why one Twitter account is Not Enough?)

So I just hit my 400th tweet on @mdfranz but am scarcely up to 20 on @frednecksec, and I've learned a few things about how I like to use this addictive service over the past few months. And yeah (if you went there), my updates are protected, but more on that later.

When I first started following people, I was annoyed by technical people (whose blogs I read or whom I knew personally) who only tweeted about personal stuff, so I didn't follow them. I could give a shit about what sort of decadent food they were cooking, what they were doing with their wives, or their kids' accomplishments. But I was interested in 140 characters of wisdom on some technical/technology topic. If there was at least a 50:50 ratio of personal to professional content I kept following; otherwise I dropped them.

Personal Branding
As Tom Peters would say, "you are your customers." Your personal brand is reflected in those you do business with and those who do business with you. The same applies to your Twitter followers and the folks you tweet with. If people who follow you tweet about stupid shit (to put it crudely, but that probably characterizes some large % of tweets), that reflects poorly on you, since one of the first things I do when I follow someone (or someone follows me) is check out the people they follow and their followers. It is the same principle as only connecting with "people you trust" on LinkedIn. On my public account I'm more open to following somebody I don't know well, and I let anybody follow me, including spambots. But on my private account I approve all followers.

Privacy
Frankly, a lot of stuff you tweet about has no business on the public Internet (and all the various bots that follow you): where you shop, what you eat, the activities you do with your family, where you are geographically is none of the damn business of people you don't really know, let alone Twitter's public timeline. This is why I protect my updates on @mdfranz but don't on @frednecksec. Several weeks ago I registered for a demo version of some webapp and a product manager/salesperson started following me. Creepy. I don't want salespeople following me. And during the inauguration I wondered aloud how well Sprint's EVDO network would hold up, and somebody in customer service pinged me. She was nice/professional enough, but I don't want that sort of interaction. I also don't want people I don't know to know where I frequent.

Different Media for Different Messages
I've found that there are also two kinds of tweets: personal, biased observations, and more objective, factual statements that answer the original Twitter question, "What are you doing?" -- more specifically, what am I reading that might be of interest to my readers. More reflective, opinionated tweets go on my personal account, while the others (especially those that are narrowly security related) go on my public account. This is the reason I've moved most of my high-volume Twitter lists (that mostly shared links and articles) over to my public account. Public content stays public, private content stays private, and on my public account I can also see when something I've read or seen has already been tweeted. I think RT is lame, since the whole point is to post original content, or content that reflects a certain perspective or range of interests.

So which Twitter client allows you to use multiple accounts at once? Twhirl. Or use multiple browsers, which is generally a good idea anyway.

Sunday, February 15, 2009

Twitter / FredneckSec Updates




For better or worse, I'm now up to two Twitter accounts, having created @frednecksec with the goal of trying (once again) to form a security networking group in the Frederick area along the lines of CharmSec or NoVA Sec, except for us country folks who live too far out to make it into the DC/Baltimore area (or stick around there after work).

Yeah, so this is definitely cutting into my blogging. Apart from a regional focus, I hope to tweet on stuff you won't see elsewhere on Twitter, even if it tends to border on the obscure.

FredneckSec was something a couple of us (unsuccessfully) tried to do last summer, but I'm hoping that with the power of Twitter and some new folks I've met here in the New Market area we can get this rolling again real soon now.

Tuesday, January 27, 2009

Is jennydddggeee too hot for you? (or, Automated Twitter Spam Blocking?)



If you are reading this blog, you don't know anyone like this, don't want to know anyone that looks like that -- and certainly don't want either of them following your every move.

So it should be pretty easy to write fewer than 25 lines of Python using Twyt that automatically remove any followers that have only a single post.

But there must already be tools that do this. Or Twitter clients that will automatically block spam followers.
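
The filtering logic itself is trivial; it's the API plumbing that takes the lines. Here's a sketch of the idea where the `client` object and its `followers()`/`block()` methods are hypothetical stand-ins -- Twyt's real API will differ:

```python
# Sketch of the "drop followers with a single post" heuristic. The
# `client` object and its followers()/block() methods are hypothetical
# stand-ins, not Twyt's actual API.

def looks_like_spam(follower, max_statuses=1):
    # Heuristic from the post: an account with at most one (presumably
    # promotional) update is probably a spambot.
    return follower.get("statuses_count", 0) <= max_statuses

def purge_spam_followers(client):
    blocked = []
    for follower in client.followers():            # hypothetical call
        if looks_like_spam(follower):
            client.block(follower["screen_name"])  # hypothetical call
            blocked.append(follower["screen_name"])
    return blocked
```

Whatever library you use, the shape stays the same: list followers, check the status count, block.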

Wednesday, January 14, 2009

Inside the Gmail Login Sequence (or, has anyone documented all the parameters and JSON response codes)

I generally don't like Wiley books, but a chapter near the end on how Gmail works actually isn't that bad.

I'm sure there's more stuff like this:

/gmail?
ik=344af70c5d
&view=cv
&search=inbox
&th=101865c04ac2427f
&lvp=-1
&cvp=0
&zx=9m4966e44e98uu

As you can see, this is the message ID of the message I clicked on. But the others are mysterious at the moment. At this point in the proceedings, alarms went off in my head. Why, I was thinking, is the variable for message ID th when that probably stands for thread? So, I sent a few mails back and forth to create a thread, and loaded the Inbox and the message back up

But I haven't found anything elsewhere dissecting the URL parameters, apart from looking at the libgmail source (the constants file in particular). Has nobody documented this stuff, or is Google burying any documentation on reverse engineering Gmail?

It is sort of curious that the author is using tcpflow. Fine tool, but using an intercepting proxy like Paros, or something like Firebug, is a hell of a lot more efficient than sniffing.
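
For poking at the parameter blob above, you don't even need a proxy: it's just a query string, and Python's standard library pulls it apart (what each key means beyond `th` is still anyone's guess):

```python
# Pull apart the Gmail query-string parameters quoted above using only
# the standard library. Only `th` (thread/message ID, per the book
# excerpt) has a known meaning; the rest remain mysterious.
from urllib.parse import parse_qs

query = ("ik=344af70c5d&view=cv&search=inbox"
         "&th=101865c04ac2427f&lvp=-1&cvp=0&zx=9m4966e44e98uu")
params = parse_qs(query)

print(params["th"][0])  # -> 101865c04ac2427f, the thread/message ID
```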

Saturday, January 03, 2009

Hello CouchDB (or, does Erlang make it Cool?)


Today I ran across CouchDB in Ten reasons why CouchDB is better than MySQL (provides a high-level overview but isn't terribly interesting) and a more interesting discussion among a bunch of database guys (which I'm obviously not) about what sorts of problems this (and similar approaches) are most suited for. I must say, after having played around with ORMs (ActiveRecord, Django, Hibernate) over the years, and more recently having had my nose in Moodle databases, I'm sympathetic to the idea that maybe not everything should be stored in tables, rows, and fields, with the need to design (or discern) the relations. There is just something sort of contorted about the process of viewing the world (or the data we are trying to capture about the world) this way. I have also definitely felt the pain of having to adjust a schema (as you realize new requirements) and perform "migrations," so there is definitely something intriguing about CouchDB.

Some of the aspects I found interesting:


In an SQL database, as needs evolve the schema and storage of the existing data must be updated. This often causes problems as new needs arise that simply weren’t anticipated in the initial database designs, and makes distributed “upgrades” a problem for every host that needs to go through a schema update.


and


CouchDB is a peer based distributed database system. Any number of CouchDB hosts (servers and offline-clients) can have independent “replica copies” of the same database, where applications have full database interactivity (query, add, edit, delete). When back online or on a schedule, database changes are replicated bi-directionally.


And there is also an O'Reilly book in progress that provides a much more readable introduction, and Standalone Applications with CouchDB is also definitely worth reading.

(Oh, and on Ubuntu 8.10 it is in the repo, so it is an apt-get away.)
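
The schema-evolution point in the first excerpt is easy to illustrate without CouchDB at all: documents are just JSON, so two records in the same (here hypothetical) database can simply carry different fields, and a new field shows up without any ALTER TABLE step.

```python
# Toy illustration of the "no schema migration" argument: documents are
# plain JSON, so records with different shapes coexist and readers just
# tolerate missing fields. The documents here are made up for the example.
import json

doc_v1 = {"_id": "post-1", "title": "Hello CouchDB", "body": "..."}
doc_v2 = {"_id": "post-2", "title": "Erlang", "body": "...",
          "tags": ["couchdb", "erlang"]}  # a field doc_v1 never had

for doc in (doc_v1, doc_v2):
    # Round-trip through JSON, as CouchDB would store and replicate it.
    record = json.loads(json.dumps(doc))
    print(record["_id"], record.get("tags", []))
```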

Saturday, November 22, 2008

Qore: your new webappsec buddy?

Now, I don't do webappsec anymore, but if I did, I would investigate Qore.

Why?

The areas Qore targets are interfacing, database integration, threading (and SMP scalability) and embedding (and arbitrarily restricting) code. Qore is also a dynamically-typed language to facilitate rapid prototyping and development (particularly regarding agile programming, disposable interfaces, etc). To my knowledge there is no other programming language with this design focus.

You can get a feeling for this aspect of Qore's design when programming with Qore's database-independent DBI infrastructure (through the Datasource and DatasourcePool classes), Qore's XML and JSON integration (where XML and JSON strings and qore data structures can be converted from one to the other), easy use of the Socket class and classes provided by modules providing messaging integration, etc.


It is a bit too Perlish for my taste (damn you, semi-colons!), but it looks sort of interesting and I'll probably play around with it.

root@ubuntu-ve804:~# qore --version
QORE for Linux unknown (32-bit build), Copyright (C) 2003 - 2008 David Nichols
version 0.7.1-2304 (builtin features: sql, threads, xml, debug)
module API: 0.5
build host: Linux localhost 2.6.24-21-openvz #1 SMP Wed Oct 22 02:50:53 UTC 2008 i686 GNU/Linux
C++ compiler: g++
CFLAGS: -I/usr/include/libxml2 -D_GNU_SOURCE -D_QORE_LIB_INTERN -DMODULE_DIR="/usr/local/lib/qore-modules" -g -g -m32 -D_THREAD_SAFE -Wall -lm
LDFLAGS: -lz -lpcre -lxml2 -lbz2 -lssl -lcrypto -g -lm
this build has options:
OPTION atomic operations = true
OPTION stack guard = true
OPTION library debugging = true
OPTION runtime stack tracing = true
ALGORITHM openssl sha224 = true
ALGORITHM openssl sha256 = true
ALGORITHM openssl sha384 = true
ALGORITHM openssl sha512 = true
ALGORITHM openssl mdc2 = false
ALGORITHM openssl rc5 = false
FUNCTION round() = true
FUNCTION timegm() = true
FUNCTION seteuid() = true
FUNCTION setegid() = true
FUNCTION parseXMLWithSchema() = true

Monday, November 17, 2008

Nice Blog on eLearning Course Design

Although I'm generally ambivalent about the whole idea of eLearning 2.0, Tony Karrer has quite a few practical tips on eLearning design, regardless of the version of your eLearning.

I found the breakdown of specific types of users (and the implication that you must design activities for each) quite helpful:

Spectator / Joiner / Creator Levels of Participation

One of the best decisions we made early in the design of the course was to define different levels of participation in the course. Here's how we defined it:

Each week we will share new activities that will allow you to explore each of these tools. We recognize that there will be differences in interest, experience and time available for exploration, so these activities will be designed to give you meaningful experiences at different levels:

* The Spectator--These will be exercises or activities that should take approximately 15 minutes to complete. The Spectator level is for people who want just a quick exploration of the tools and minimal interaction.

* The Joiner/Collector--For those who want to delve more deeply into a particular Web 2.0 tool, the Joiner/Collector level will consist of activities that take approximately 30 minutes to complete.

* The Creator--These activities are for people who want to really spend some time exploring and trying out a particular tool or set of tools. The activities will take approximately 75 minutes to complete and will allow you to immerse yourself in the Web 2.0 experience.


Of course, I think this breakdown still works if you delete all the references to Web 2.0.

And I think these different styles of learning/levels of engagement actually apply to the [Instructor Led] classroom as well.

Thursday, November 06, 2008

Web 2.0 Security You Can't Believe In

From Obama, McCain campaigns' computers hacked for policy data.

Obama is PHP (the horror, the horror). McCain is ASP.


As described by a Newsweek reporter with special access while working on a post-campaign special, workers in Obama's headquarters first detected what they thought was a computer virus that was trying to obtain users' personal information.

The next day, agents from the FBI and Secret Service came to the office and said, "You have a problem way bigger than what you understand ... you have been compromised, and a serious amount of files have been loaded off your system."

One of the sources told CNN the hacking into the McCain campaign computers occurred around the same time as the breach into those of Obama's campaign.

Representatives of the campaigns could not be reached for comment on the matter.

Saturday, October 13, 2007

Kulturkampf 2.0



Believe it or not, I'm actually glad that I had so little exposure to the Internet while I was in college. It was only in my last year (in 1993, as I struggled to complete the inane requirements for teacher certification in Texas) that I ran across gopher, usenet, and lynx (that was what I would telnet up to in Kansas from my VAX account at Texas A&M, right?).

Most of my friends (including my soon-to-be wife) were grad students in the A&M English Department. Many were embroiled in debates about the Culture Wars of the late 80s and early 90s. Critical theory, multiculturalism, Post-Modernism, Post-Colonial Literature, Foucault, Derrida, the Canon, the Body, the Border. Critical Pedagogy. The flattening of hierarchy, the collapse of high and low culture, the end of authority, the decimation of institutions, the horrific lack of standards, the descent into moral chaos, etc., ad nauseam.

This is what I was exposed to in my upper-level English and History classes. This is what we debated and argued. I remember attending a speech by Dinesh D'Souza, who was denounced (yes, denounced in the Maoist sense) by several African-American students, in shrill terms, as a racist. But as a white, male, middle-class Liberal Arts major at a majority-engineering school (adding insult to injury, the English department shared a building with the business school, the horror!) who had no clue what I wanted "to do," let alone "how to do it" -- I felt like a persecuted minority. Put off reality by going to grad school. In what? Apply for that MFA program in creative writing? Could I get into the Iowa Writers' Workshop? Probably not.

It all seems so trite now (ah, to return to the naivety of age 22, although I remain a reactionary still). An interview with Andrew Keen (the author of The Cult of the Amateur: How Today's Internet is Killing Our Culture) spurred this nostalgic blog entry.

The review in Academic Commons was the most compelling and begins with:
Andrew Keen insists he is neither anti-technology nor anti-progress. Yet this veteran of the dot com era begins his recent book, The Cult of the Amateur (Doubleday/Currency, 2007), sounding much like a high-culture snob pooh-poohing the vulgar masses for having appropriated the Web as their own and, in the process, wreaking potential destruction on our economy, culture and values. Keen's polemic hints less at neo-Luddite dissent than at an underlying bitterness and resentment--at his own gullibility at having been so easily sucked into the Internet dream, and also at those who have taken the technology out of the hands of professionals like himself ("I almost became rich" [p. 11], he confesses in the beginning of the first chapter). Drawing on 19th-century evolutionary biologist T. H. Huxley's "infinite monkey theory," Keen fears what lies ahead when the masses are empowered with far-reaching technology. As the author describes it, Huxley's theorem asserts that if infinite monkeys are provided with infinite typewriters, one of these monkeys will eventually create a masterpiece. Keen updates and reverses the theorem, replacing monkeys with humans and typewriters with networked personal computers; and "instead of creating masterpieces, these millions and millions of exuberant monkeys--many with no more talent than our primate cousins--are creating an endless digital forest of mediocrity" (pp. 2-3). By the end of the introduction, a reader would have just cause to feel a bit insulted

And definitely better than the one in the NY Times
Mr. Keen argues that “what the Web 2.0 revolution is really delivering is superficial observations of the world around us rather than deep analysis, shrill opinion rather than considered judgment.” In his view Web 2.0 is changing the cultural landscape and not for the better. By undermining mainstream media and intellectual property rights, he says, it is creating a world in which we will “live to see the bulk of our music coming from amateur garage bands, our movies and television from glorified YouTubes, and our news made up of hyperactive celebrity gossip, served up as mere dressing for advertising.” This is what happens, he suggests, “when ignorance meets egoism meets bad taste meets mob rule.”

Whether or not this depiction is true (and there certainly have been critiques of his facts) -- which is a different question from whether this development (some of which is obviously the case) is a bad thing -- this critique seems strangely naive, ignorant of history and recent philosophy.

How many times in past cultural changes/wars have we heard these same arguments?

However, it is curious that the most interesting technological trends of the day (and many, such as Free/Open Source software, the ultimate amateur endeavor) seem to be an ultimate fulfillment of the prophecies of the postmodern theory I was reading 15 years ago.

If only I had known.

Can't Sleep? Read about Scaling Web Apps

If you, too, happened to watch Knocked Up tonight and can't sleep (and no, I haven't been reading the "baby books," but I probably should be), you might try reading the fairly vacuous article/discussion Why most large-scale Web Sites are not written in Java. It led me to High Scalability, which is actually pretty interesting and reminded me of the cool Joyent prezo from RailsConf I ran into a while back.

Thursday, September 20, 2007

GNUCITIZEN: I liked you back when you were a temp!


About a year ago I started following GNUCITIZEN (back when it was just PDP) because the graphics were cool and there was interesting content like running Jython within your browser and even the AttackAPI.

But things started to get less and less interesting as GNUCITIZEN hit the Web 2.0 Security warpath--and other folks started blogging besides PDP. Then came the Firefox vuln (yeah, the one you just updated for) and then today's pre-disclosure of an Acrobat 0-day.

The site is certainly on a downward trajectory, and it was with the certain sadness that comes this time of year [in North America, when you know the days are getting shorter] that I read the profound advice not to open any PDFs. Another non-actionable disclosure. If you are going to pre-disclose (which I disagree with, but fine!) at least provide something useful, like a PoC. Otherwise, what is the point? A site that had the potential to be something interesting and off-beat like lcamtuf's has devolved into banal disclosure posturing. And we certainly could use a lot less of that.

Oh, but it looks like the site is now down, so it's not a total loss.

Sunday, July 22, 2007

Sunbird Recant (or searching for a browser-based iCal Replacement)

So once again I'll recant on a previous blog post.

Sunbird is crap. Events mysteriously get created and can't be deleted. Or get deleted, period. Or something weird happens with the interface. It never crashed, though. But it was annoying enough to send me back to iCal.

So what I really want is an entirely browser-based iCal/Google Calendar-like tool (that means JavaScript, and I'm not a JavaScript programmer) that allows me to:

  • Drag (and eventually drop) activity across a daily schedule for stuff I work on.
  • Remember/autosuggest project names
  • Export events to some standard format: iCal, XML, YAML, or whatever
  • Summarize project activity by week/month/total

Yeah, this probably could be done in Rails/Django, but I don't want that. No databases. No webservers, but still browser-based. Am I crazy? Creating a Dojo Calendar has promise, but it requires server-side code. What I want is something self-contained like TiddlyWiki, where the data is all stored in the .js and can be moved around and modified.

Since I'm obviously in over my head (what else is new), this is obviously something I wouldn't want to start from scratch, but what next? Should I look at GWT or, better yet, pyjamas to avoid Java development? Obviously this would be (relatively) trivial to do as a traditional desktop GUI app, but that is no fun. The key is it needs to be portable, lightweight, and usable offline. Somebody else has to have run across this sort of problem (and solved it) before. We'll see what happens.
Tuesday, May 08, 2007

Drupal/PHP You Win for the night

I've spent the last three hours battling Drupal on Dapper (PHP 5.1) and the dreaded Access Denied error after you create your admin account. I swear I had this working a month ago.

This doesn't work:
root@karlov:/var/www# dpkg -l | grep php
ii libapache2-mod-php5 5.1.2-1ubuntu3.7 server-side, HTML-embedded scripting languag
ii php5-common 5.1.2-1ubuntu3.7 Common files for packages built from the php
ii php5-gd 5.1.2-1ubuntu3.7 GD module for php5
ii php5-mysql 5.1.2-1ubuntu3.7 MySQL module for php5
ii php5-mysqli 5.1.2-1ubuntu3.7 MySQL Improved module for php5
root@karlov:/var/www#

This still did!
root@ubuntu:/var/www# dpkg -l | grep php
ii libapache2-mod-php5 5.1.2-1ubuntu3.6 server-side, HTML-embedded scripting languag
ii php5-common 5.1.2-1ubuntu3.6 Common files for packages built from the php
ii php5-gd 5.1.2-1ubuntu3.6 GD module for php5
ii php5-mysql 5.1.2-1ubuntu3.6 MySQL module for php5
ii php5-mysqli 5.1.2-1ubuntu3.6


Nah, the security update didn't break something...

Damn you PHP, damn you!

Saturday, March 24, 2007

Fun with Introspection on Public XML-RPC Servers

Somewhere I managed to find a list that someone had compiled of sites that support XML-RPC (these are mostly blog pingers), so for grins I tried iterating through them with system.listMethods(), as I did for Python and Ruby previously. I guess I shouldn't have been surprised that so many had introspection enabled. The results?

A sampling of the server names that were returned:
• Apache/1.3.33 (Debian GNU/Linux) PHP/4.3.10-18
• Apache/1.3.29 (Unix) PHP/4.3.7
• Apache/1.3.34 (Unix) mod_fastcgi/2.4.2 mod_ssl/2.8.25 OpenSSL/0.9.7e PHP/4.4.4 FrontPage/5.0.2.2510
• Apache XML-RPC 1.0
• Apache/2.0.55 (Ubuntu) PHP/5.1.2
• psfe
• Apache/2.0.52 (CentOS), X-Powered-By: PHP/5.1.6
• Apache, X-Powered-By: PHP/4.4.2
• SOAP::Lite/Perl/0.60
• Apache Coyote 1.1

A lot of Apache servers (that didn't ID the XML-RPC implementation) returned message and flerror as valid methods. And the "expected result" for many others returned as little as:
• system.listMethods
• system.methodSignature
• system.methodHelp
• system.multicall
• weblogUpdates.ping
• weblogUpdates.extendedPing

While the juiciest spit out:
• syndic8.GetFeedCount
• syndic8.GetLastFeed
• syndic8.FindFeeds
• syndic8.QueryFeeds
• syndic8.FindSites
• syndic8.GetFeedInfo
• syndic8.FindUsers
• syndic8.GetUserInfo
• syndic8.SuggestDataURL
• syndic8.SuggestSiteURL
• syndic8.GetLicenses
• syndic8.CreateSubscriptionListFromOPML
• syndic8.CreateSubscriptionListFromHTML
• weblogUpdates.Ping
• weblogUpdates.ping
And a whole lot more...

Some of the servers that did not return any method names happily reported that the system object wasn't present:

java.lang.Exception: RPC handler object "system" not found and no default handler registered

Can't evaluate the expression because the name "system.listMethods" hasn't been defined.

Failed to access class (system): Perl v65280.0.0 required (did you mean v65280.000?)--this is only v5.8.6, stopped at (eval 119) line 1.\n

Is this a shock? No. Are these information disclosures the end of the world? Certainly not. These are most likely all public APIs meant to be exposed to the world. What concerns me is the relatively small number of XML-RPC vulnerabilities that have been disclosed so far (CVE-2005-0089, CVE-2005-1921, CVE-2005-2498, CVE-2005-1992). I probably missed a few. Python, Ruby, and PHP PEAR XML-RPC implementations have all had shell command execution and object/method permission access issues. I guess only time will tell about the quality of their Java and Perl counterparts and any other implementations out there.
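
For anyone who wants to reproduce the survey, here's the same introspection trick against a local stand-in server, using only the Python standard library (the original run iterated over a list of public blog-ping endpoints instead):

```python
# The system.listMethods introspection probe, demonstrated against a
# local stand-in XML-RPC server rather than the public endpoints the
# post surveyed. Standard library only.
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def start_server():
    # Bind an ephemeral port; register the system.* introspection
    # methods plus one "real" method, as many public servers do.
    server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
    server.register_introspection_functions()
    server.register_function(lambda name: "pong:" + name, "weblogUpdates.ping")
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1]

def list_methods(url):
    # What the survey did for each endpoint on the list.
    return xmlrpc.client.ServerProxy(url).system.listMethods()

port = start_server()
methods = list_methods("http://127.0.0.1:%d/" % port)
print(methods)
```

Point `list_methods()` at any endpoint on such a list and you get exactly the kind of method inventories quoted above, or one of the errors, if introspection is off.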

Friday, March 23, 2007

Pebble Rocks!



Yes, Spring Mania (and no, I don't mean "March Madness," although it certainly fits!) has kept me up again. Either that or the first decent Italian Beef I had, at Portillo's this evening.

So the only thing more fun than building Virtual Machines is installing .war's. One of the areas where J2EE (or a subset thereof, in my case meaning Tomcat) has Python and Ruby beat is the ease of deploying apps. No GEM interrogation (select 1, 2, or 3), no dependencies or Cheese Shop silliness (don't even get me started on the Turbogears Python installer, almost as bad as compiling uClinux, which also rocks, BTW). Untar, find the .war, browse, deploy, go to the URL.

On the wiki front, SnipSnap and JSPWiki are pretty good that way, but I'm impressed with Pebble. It literally only took 10 minutes to get a first blog up, and that was after pulling down a JDK and Tomcat 5.5.23 (which apparently fixed the manager servlet bug I ran into several weeks back when I was trying to get the JSON-RPC server demo to work?!), setting JAVA_HOME, and changing tomcat-users.xml.

And what's amazing is that it's quick, even with all this cruft:

root@ubuntu:~/apache-tomcat-5.5.23/webapps/pebble/WEB-INF/lib# ls
acegi-security-1.0.1.jar jstl-1.1.2.jar
asm-1.5.3.jar log4j.jar
asm-attrs-1.5.3.jar lucene-1.4.1.jar
asm-util-1.5.3.jar oro-2.0.8.jar
cglib-2.1_3.jar pebble-2.0.1.jar
commons-beanutils.jar radeox-1.0-beta2.jar
commons-codec-1.3.jar saxpath.jar
commons-collections-3.1.jar spring-beans-1.2.8.jar
commons-fileupload-1.0.jar spring-context-1.2.8.jar
commons-httpclient-3.0-rc3.jar spring-core-1.2.8.jar
commons-lang-2.1.jar spring-dao-1.2.8.jar
commons-logging-1.0.4.jar spring-web-1.2.8.jar
delicious-1.10.jar standard-1.1.2.jar
ehcache-1.2.1RC.jar tidy.jar
jcaptcha-all-1.0-RC3.jar xmlrpc-1.2-b1.jar


Of course this is within a VM with 384MB on a 2.8GHz/2G Pentium-D, but am I tempted to try this on a 96M Xen slice?

Thanks, but no thanks; I'll stick with Blogger.

Wednesday, March 21, 2007

Yesterday's Firefox Update, AttackAPI, and Jikto (or the need for a CVSBI)

So the auto-update for Firefox 2.0.0.3 on my Powerbook hit last night (amazing how much I still find myself using my old 12" G4 despite the new box) and I briefly looked at the release notes. I was busy trying to get a release compiled for OpenBSD 4.1 in under a couple of hours while running under VMware, so I saw port scanning, browser, Javascript, FTP PASV, blah blah blah, and went back to man release. But given all the hype about the upcoming Jikto release, I remembered PDP's AttackAPI and was looking for examples when I ran across the white paper and a nice blog providing a summary of the technique and the vulnerability behind the new Firefox release. So I read it.

Although the "Manipulating FTP Clients Using The PASV Command" article is well written and organized (I hate LaTeX-generated docs, though), I was overwhelmingly left with a feeling of "so what, everybody had to do a Firefox update for this?" I think there needs to be a Common Vulnerability Scoring Bullshit Index (aka CVSBI; hmm, this sounds right up the alley of CIAG Research, wonder if Andrew can free up some cycles from AGA-12 for this, or maybe this doesn't rise to the level of "CI?") that is perfectly valid but ranks the number of "hoops" the attacker (or victim) has to jump through to get it to work. Maybe it is just me, but something felt cheap about an attack that requires victims to be lured via an XSS to a rogue FTP server (there's a multiplier of at least 2 here?!) that uses the user name and password as the C2 channel for the scanner. And all just to be able to "sort of" scan ports (meaning being able to distinguish between closed/wrapped and open/filtered) and maybe get banners and (oh my God, the end is near!) fingerprint services based on the time they take to respond. And there were other subtle limitations/pre-conditions in the attack.

At some point (not sure if this vuln is at that point) you get to the "solar system aligning" type of vulnerability, which is quite trivial to demonstrate under lab conditions but is more difficult in the real world (a case in point: I put TCP Reset Attacks against BGP into this category).

And that is why we need the CVSBI. Yes, the Firefox FTP client implementation could have been tighter and, yes, this should have been fixed eventually, but it seems this could have been rolled into the next update, where we could have rolled up a number of these vulns that score moderate to high on the CVSBI.