
About

Your humble host can alternatively be found haunting these here tubes under many a different guise; I'd recommend starting your inquest at his usual place of residence. And there's always the Twitter.

Postcard Sci-Fi Monday, November 03, 2008

Check it.

Automatic sub-versioning with Xcode, Addendum Tuesday, October 28, 2008

I was working on a very similar solution to the one Daniel Jalkut presents for this problem when a co-worker pointed me to the aforementioned post. I'd never heard of svnversion (bad me, I know), and, since I gleaned useful things from his original post, it's only fair I reciprocate with my improved script.

So, without further ado, here it is. Follow his instructions for use; the trick is right-clicking the "Run Script" build phase and copying this script directly into the dialog. Make sure that "Shell" is set to /usr/bin/perl (add -w too, preferably).
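For illustration only, here's a minimal sketch of the general approach rather than the exact script: it asks svnversion for the working copy's revision and stamps it into the built product's Info.plist. It assumes an XML-format Info.plist, svnversion on the build's PATH, and CFBundleVersion as the field being stamped.

use strict;

# Ask svnversion for the working copy's revision (e.g. "1234", "1234M", "1234:1236").
my $rev = `svnversion -n "$ENV{PROJECT_DIR}"`;
chomp $rev;
$rev =~ s/[^\d].*$//;                        # keep only the leading revision number
die "Couldn't determine an svn revision" unless length $rev;

# Stamp the revision into the built product's Info.plist as CFBundleVersion.
my $plist = "$ENV{BUILT_PRODUCTS_DIR}/$ENV{INFOPLIST_PATH}";
open my $in, '<', $plist or die "Can't read $plist: $!";
my $xml = do { local $/; <$in> };
close $in;

$xml =~ s{(<key>CFBundleVersion</key>\s*<string>)[^<]*(</string>)}{$1$rev$2};

open my $out, '>', $plist or die "Can't write $plist: $!";
print $out $xml;
close $out;

With -w in the "Shell" field, Perl will also complain if any of those build-setting environment variables come through empty, which is a nice early warning that the phase is in the wrong place.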

A stellar day. Wednesday, March 19, 2008

Gabe, whom I always trust to keep me up-to-date on the universe outside of my little contrived one, IM'ed to inform me that today - March 19th, 2008, or 08-03-19, depending on how you write it - saw not one, two, or even three, but four gamma ray bursts observed by Swift (the first time in history that many have been observed in a single day).

One of them (080319B) was so bright as to be visible to the naked eye, albeit briefly, even considering that at a redshift of 0.937 it is about eight billion light years distant. With an apparent magnitude of about 5, it was almost as bright as Uranus: now that is one bright-ass 8-billion-year-old exploding star.

As an aside, don't forget to thank your favorite astronomer today for doing absolutely necessary and cutting-edge work. Getting a machine like Swift into space and then automating it well enough to have it find bursts like this without any human intervention is an incredible achievement. As Gabe says:
Transient astronomy is badass.
And really hard.

Update 03/30/08: here's the Wikipedia page on GRB 080319B, which highlights that this burst was even more remarkable than I realized; it set a record for the furthest object ever visible to the naked eye. I say goddamn.

the monster, incarnate part deux Thursday, February 28, 2008

The term "web 2.0" has always particularly irked me: who decided it was time to bump the "version number", and much more importantly, why?

Everyone knows the story: Sir Tim Berners-Lee at CERN writes some newfangled "hypertext" thing, hooks it into some unknown "Internet" thing, and the WWW is born. Millions of people join services like AOL and Compuserve, the majority to look at a few very static web pages and find porn on the much-older and more established (read: more porn) USENET forums.

We might call this time "web 0.0", if we were into such things, but we're really not. In any event, it was certainly the dark ages of the web: finding content even remotely related to your topic of interest (sans porn, of course) was immensely difficult, and once you found it, you'd usually end up sated, if only because the painful task was complete, and cease the search right there.

The next age would bring light into the world of the web, but it would be the neon light of commercialism.

This age was, of course, the famed dot-com boom. Recollection here should be immediate, so I won't bore you with details. Suffice it to say the free market finally realized what these "Internets" might be good for, and capitalized with reckless abandon. This is also about the time that the age of web crawlers[1] was dawning: useful tools no doubt, but much more so when the haystack was as small as it was then.

So while these new tools might help you find a bit of meagerly useful content (mostly static, unattributed, and never primary source), in reality they just made it easy to find the best place to buy books or pet supplies, sell your car, your body, or even your soul. You can pour money into the monster indefinitely, but it'll always want more.

By the time the boom busted, the haystack had grown a lot larger. We're talking big. And it just kept growing, but it wasn't getting much easier to find anything unique, interesting, new. A couple of hippies named Sergey and Larry thought they had found a better way of searching the haystack, and they had... kind of. Searching more of the haystack faster and with more forethought is great, but we're still talking evolution - not revolution - here, kids.

The problem wasn't that people weren't publishing to this medium. It was that the signal - the unique content - was getting utterly drowned out by the noise: what are search engines but impressive noise filters anyway? Without a way to propagate as well as publish, content languished in obscurity, and its authors quickly lost interest in producing more.

The point of this whole diatribe is this: it's not a coincidence that the "social" web (the true meat-n-potatoes of this new age) and the "blogosphere"[2] have grown up in lock-step.

Case in point: del.icio.us. On the surface, a minimalist tool for storing links and categorizing them. Used to its fullest, it's an amazing way to find content that is much more likely to be of interest to you, if only because you've gone from relying on an algorithm's choices to relying on another human for guidance.[3] Without any extra effort, of course, because we're nothing if not lazy.

It helps me to think of it like so: "web 2.0" and the social connectivity it has endowed have allowed all the single "point sources" of information (content producers) in the "universe" of the web to begin to organize themselves into more closely-connected groups. Only here can the self-publishing of primary source content thrive.

Each new piece of content is a signal sent out by its publishing site: it has some "strength" and can therefore travel only a certain "distance".[4] In the early web, the content producers were so "far apart" that the signal was never picked up, so it dissipated. Now, the simple fact that content producers have begun to move "closer together" allows even a low-energy signal to be picked up by another party and possibly "retransmitted" (attributed, linked-to, etc.), amplifying its impact in the space.

The "social" web has moved these sources closer. It has brought sources that were producing similar content to the point where nearly every signal is picked up by at least one other interested party, because the "distances" involved are so much smaller. This is why Gruber will consistently get picked up by TUAW and the like, but wouldn't ever be seen on Kos. Gruber and TUAW are seperated by a short distance; Kos is quite distant from both, but not at all far from a source such as The Hotline.

Now I realize this must be approaching a thousand words or so, and the writers at Valleywag would have my nuts for this, so it's time to Wrap. It. Up.

What is "web 2.0"? The technology hasn't really changed (bring up AJAX, get my fist in your face), so what did? People, and the way they use it, of course: we stopped searching the haystack one-by-one and started helping each other. Marx would be proud; the social revolution has begun!


--
[1] My choice back in the day was none other than the O.G. Gangsta' itself, WebCrawler.
[2] For the record, a term far worse than "web 2.0".
[3] Yahoo! tried this for a long time, but through a single point of contact, and it's no wonder it failed. And no wonder they were the ones who bought Del.icio.us.
[4] When I speak of distance here, I mean in an abstract space of semantic distance where the further away two points are the harder it is for one to "hear of" and become interested in another.



CSI: Santa Rosa Tuesday, December 18, 2007

Hedwig flew this onto my desk this morning:
I am a Police Detective assigned to the Northern California Computer Crimes Task Force [NC3TF] in Napa, CA. Our task force covers 13 counties in Northern California (including Sonoma!) and provides computer and cell phone forensic analysis services to local law enforcement agencies. Our first iPhone came into the office last week as part of a murder investigation. The suspect used his iPhone to send and receive text messages before and after the crime and the information became critical to the investigation. We were able to obtain much of the phone’s data by parsing the backup files created when syncing the phone with iTunes. The problem became how to “translate” the information into a format that could be understood by non-technical investigators.

That is when we found your software. Syphone was able to quickly and accurately display the SMS messages in a format that will undoubtedly be understood by the officers, attorneys and more importantly, the jury.
So friggin' cool.

I may have written a big chunk of a gaming platform that millions use every day, but this is one of the coolest uses for software penned by my hand that I think I'll ever hear of. A murder, no less: damn son!

(Note: emphasis and hyperlinks added to quoted text.)

WebKit 3, mobile release? Monday, November 19, 2007

It seems to me that the 1.1.2 release of the iPhone firmware packaged a version of MobileSafari that is substantially faster at loading large pages, especially JavaScript-heavy pages. This is entirely speculative, but it looks like the iPhone surreptitiously got in on the new hotness that is WebKit 3.

Apple released both 10.5.1 and 10.4.11 recently, updating Tiger to ensure that even the previous OS had access to all the good new stuff in WebKit 3. As 1.1.2 was released just about the same time as these updates to OS X, it seems entirely logical that Apple pushed these changes into "mobile" OS X as well, especially considering all the talk about JavaScript's unfortunate performance on the iPhone to date.

Anyone done any digging? Let me know.

Update: Swannman suggested a simple-yet-effective method I'd completely missed for proof: user agent strings! Without further ado, the proof is in the pudding:

-- Safari 3 on 10.5.1 reports:
Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-us)
AppleWebKit/523.10.3 (KHTML, like Gecko)
Version/3.0.4 Safari/523.10

-- MobileSafari on iPhone 1.1.2 reports:
Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en-us)
AppleWebKit/420.1 (KHTML, like Gecko)
Version/3.0 Safari/419.3

So while it looks like MobileSafari on 1.1.2 may not be running the latest-and-greatest version of WebKit, it is definitely endowed with a version in the 3.0 family, and one I'd gather has the majority of the speed increases touted in the "10 new things" post.

Now the really interesting question: what user agent does MobileSafari prior to 1.1.2 purport to be? Anyone with an older iPhone OS version can quickly hit this link and fire an email my way with the results, so we can put this one to rest once and for all.
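(For the curious, a page for this needs nothing fancy. As a throwaway sketch, and not the actual page behind that link, a tiny Perl CGI that just echoes back whatever User-Agent header the visiting browser sends would do:)

use strict;

# Echo the visitor's User-Agent string back as plain text.
my $ua = $ENV{HTTP_USER_AGENT} || '(no User-Agent header sent)';
print "Content-Type: text/plain\r\n\r\n";
print "$ua\n";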

Yearn for thee, ZFS Tuesday, November 06, 2007

My sister called me tonight, frustrated and frantic. She's working in iMovie on the final project of her college career, and OS X keeps yelling at her that her startup disk is full (and it is; only 190 megabytes remain). "It won't even let me add the last frames of the movie," she exclaims: it's clear that something must be done, and quickly.

As we sat talking through all the possibilities for clearing up disk space temporarily - most of which she's a bit squeamish to attempt solo - I mentioned that she could just use her backup drive to offload a part of her massive collection of pre-NBC-bitchslap "Office" episodes to clear up a good chunk of space for a while, then move them back when done. After whining about the time investment even that would take, eventually she assented: what else was she to do?

This is always when my inner[1] geek tends to chime in: "If only she were using ZFS... a startup disk that was really just a pool could add space from the backup drive, no es problema!" (Yeah, my inner geek has a bit of Mexican in him.)

Unfortunately, so far it's only been the nerds and fanboys who have cared about, or even known about, ZFS's (albeit read-only) inclusion in the release of Leopard. As such, the Mac-using public marches along with a file system that was originally introduced[2] the same year my nearly-college-graduate sister was born. So sayeth "The Architect" himself:
"We've rethought everything and rearchitected it," says Jeff Bonwick, Sun distinguished engineer and chief architect of ZFS. "We've thrown away 20 years of old technology that was based on assumptions no longer true today."
If Apple Geniuses could tell Paul Photoguy or Greg Garagebandman that they could take that LaCie 1TB drive home and immediately add it as extra capacity to their pre-existing storage setup, I imagine the public as a whole would start getting a lot more excited about things like next-generation file systems... because dammit, we nerds are!

[1] I don't think I qualify for having an "inner" geek: it's all outer geek baby, geek all over.
[2] Yes, I know it was regular HFS released in 1985, and not HFS+. Even still, HFS+ is nearly ten years old at this point, besting ZFS by seven years.

Eh, don't worry, it's only 0.0004%. Monday, November 05, 2007

We make a product that, during the course of its work, does a bit o' disk partitioning. On Intel hardware (which uses the newer GUID Partition Table), Apple has defined a 128 megabyte "gap" partition that is created after any partition on large-enough disks. If you're a developer writing code that will be manipulating the partition table, you usually want to ensure that you stick exactly to the "letter of the law"; in this case, whatever Apple does is all the truth and justice you need. Being diligent developers, we made sure our software respects this gap when creating the partition(s) it needs.

But when Leopard was released, we kept getting tech support calls exclaiming that any machines that had been partitioned using our software were failing to install the new OS successfully. Days went by and our QA couldn't reproduce it; then someone from Apple called directly. Turns out it was our code: the gap partition we created was inexplicably one block too small. Since the hardware block size is 512 bytes, and the gap partition is 128 megabytes, this accounted for a discrepancy of but 0.00038% of the total bytes in that gap partition. And it was hosing people's machines. Update: turns out the symptom was unwanted behavior in the installer (informing users they had to reformat their drives, something most didn't want to do) rather than actual data loss. Still, no good.

An effort was launched to track down the problem, and after poring through code that no man should have ever put to disk in the first place, the offender was found.

As it all too often turns out, the problem was a minor miscalculation in a small part of a larger calculation to determine the starting and ending addresses for the new gap partition. Considering that these "gap" partitions are just that, blank gaps left there to ease future software development, we had apparently been getting by on the fact that no system app or installer actually checked to ensure these gaps were of the "required" size.

That is, of course, until the Leopard installer, in which Apple decided, for the first time, to check and enforce the size of these partitions, causing the installer failures we've been hearing about constantly lately. All because our code calculated 262,143 when it really meant 262,144.
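To make the arithmetic concrete, here's a sketch (the starting address is hypothetical, and the off-by-one shown is the classic form of the mistake, not necessarily our exact code): conflate a partition's inclusive ending address with its length and you silently lose one block.

use strict;

my $block_size = 512;                        # bytes per block
my $gap_bytes  = 128 * 1024 * 1024;          # Apple's 128 MB gap partition
my $gap_blocks = $gap_bytes / $block_size;   # 262,144 blocks required

my $start = 409_640;                         # hypothetical starting block address

# Correct: a partition covering N blocks ends at start + N - 1 (inclusive),
# so its length is end - start + 1 == 262,144.
my $end = $start + $gap_blocks - 1;

# Buggy: computing the length as end - start drops exactly one block: 262,143.
my $buggy_length = $end - $start;

printf "required: %d blocks, buggy: %d blocks, shortfall: %.5f%%\n",
       $gap_blocks, $buggy_length, 100 / $gap_blocks;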

If you've ever skimped on testing a bit of code where you thought an off-by-one bug might be lurking but weren't sure, hopefully now you see how much harm even 0.0004% can cause. (Especially when you're writing dangerous code to muck with partition tables directly!) Just imagine what could have gone wrong if the offending code had been in a more sensitive or destructive algorithm than it was...