Tim Hatch

Weblog | Photos | Projects | Panoramas | About

Yet Another Markup Language 28 May, 2005

I’ve been playing with YAML for the last week or so, trying to figure out if it’s a good option for storing small pieces of data which are generally arrays of dictionaries, and would store well in a database if I felt so inclined. YAML is a no-nonsense approach to storing data in a human-readable, human-editable format that’s easier to work with than XML. That’s really the best comparison I can make: it’s like XML, except it makes more sense from a structural perspective.

One slide show (which I think might be just a tad biased) described XML as “just like text files, only slower!” Some might argue that YAML is like var_export in PHP, only slower; however, the syntax is very minimalistic, and it makes sense to me.

YAML blends nicely with Markdown and SmartyPants to make a webpage, with the exception that the YAML “beginning of document” marker is a triple-dash on a line by itself, and could be confused with a heading in Markdown.

Yes, as you’ve probably guessed by now, I’m trying to sell you on the concept of using this instead of XML or (Heaven forbid) var_export (yeah, I use it because it’s easy). If you don’t want to be convinced, close your browser or press “next” on your feedreader and wait for the next post, which won’t be quite so preachy.

Here’s a minimal representation of a document with an array of strings in XML:

<?xml version="1.0" encoding="utf-8"?>
<root>
  <el>string1</el>
  <el>string2</el>
  <el>string3</el>
</root>

That’s pretty readable, right? Now let’s do the same thing using var_export:

<?php return array(
  0 => "string1",
  1 => "string2",
  2 => "string3",
);?>

That looks nice, but poses a bit of an issue if you want to add or delete an item: it’ll ruin the sequence numbers. (Admittedly, you can use array_values and a custom script to reindex them, but that’s curing the symptom by writing glue code, which is not something I want to spend my time doing.)

Now here’s that same data, only in YAML:

- string1
- string2
- string3

Yeah, you read it right. It handles associative arrays and other language constructs from perl, python, and php (why do they all start with p?) quite well too, in a way that won’t be hard to edit by hand should you desire to do so.
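For instance, the “arrays of dictionaries” shape I mentioned at the top might look like this in YAML (keys invented purely for illustration):

```yaml
# A sequence of mappings -- i.e. an array of dictionaries
- title: First post
  date: 2005-05-28
  tags: [yaml, xml]
- title: Second post
  date: 2005-05-27
  tags: [spyware]
```

Adding or removing an entry is just adding or removing a few lines — no sequence numbers to fix up.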

For a good short read that lives up to its name, check out the YAML in Five Minutes guide, which I finished in 4:52 after pondering the last couple of pages a bit.

Sadly, the only parser I can find for PHP is written in C as an extension, so I’m working on a minimalist subset that can be parsed natively in php without too much bloat. I’d really like to use YAML for some of the projects I’ve been working on. Although I have yet to find the part of the spec which specifies default character encodings, I think it’s safe to assume utf-8 won’t cause any issues.
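To give a feel for how small such a subset can be, here’s a hypothetical sketch (mine, not any real parser) in Python rather than PHP, handling only flat sequences and flat mappings:

```python
def parse_simple_yaml(text):
    """Parse a tiny, hypothetical YAML subset: a document that is either
    a flat sequence ("- item" lines) or a flat mapping ("key: value" lines).
    This is nowhere near the full spec -- just the shape of the idea."""
    lines = [l for l in text.splitlines()
             if l.strip() and l.strip() != "---" and not l.lstrip().startswith("#")]
    if lines and all(l.startswith("- ") for l in lines):
        # Block sequence: strip the "- " marker from each line
        return [l[2:].strip() for l in lines]
    # Otherwise treat it as a flat mapping
    result = {}
    for l in lines:
        key, _, value = l.partition(":")
        result[key.strip()] = value.strip()
    return result
```

The same few dozen lines translate to PHP almost mechanically, which is what makes a native-PHP subset parser seem feasible.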

2005: Year of the Spyware 27 May, 2005

I propose a change to the Chinese zodiac. Let’s make 2005 the “year of the spyware.” I’ve spent this last week fixing all sorts of Windows machines which had various levels of spyware slowing them down.

A 160GiB drive in a USB enclosure, along with Norton Ghost, ends up really handy in cases like this for backing up a drive and reinstalling from scratch.

I got to spend last Tuesday with the lovely team of Agents J and K8 doing what most people call “chillin’.” In fact I went over to fix J’s Compaq laptop, which is completely messed up and refuses to acknowledge the existence of its own Intel 2200 Mini-PCI card (which I verified is in the slot) under Windows XP Home, which drove me completely insane. Ubuntu, which is generally very compatible with wireless cards, didn’t want to admit that the card was there either.

I ended up having to give up on the card. At least it has 90% less spyware now, thanks to AdAware which was downloaded on a (ta da) Mac which was sharing its wireless connection over ethernet, which reminded me of this link which I sent to K the other day and she managed to get around to posting it before I did. http://www.pvponline.com/archive.php3?archive=20010509.

While fiddling with this for the better part of three hours, we watched Gilmore Girls (which is getting a little weird) and the One Tree Hill season finale — which, while weird, isn’t quite as weird as Joan of Arcadia, which my family was very enthralled with until it was (muwahaha) cancelled. (note: cbs has nicer urls)

While we were watching the shows on The WB, we saw an ad for Beauty and the Geek which seems to be very predictable/trite but was good for a laugh at least during the commercial for it.

Geek on TV
trying to impart knowledge “D-Day was in 1942”
Girl on TV
with a straight face, not realizing her mistake “Couldn’t be, Columbus sailed that year.”
Jessica
“Fourteen hundred ninety two, you friggin’ idiot.”
Tim
“But you’ll watch it anyway, won’t you?”
Jessica
“Well yeah!”

Subversion 27 May, 2005

At one point in time, I considered keeping one’s home directory in cvs to be a mark of sheer insanity. After all, why would one want to be able to recover old versions of files, or keep the contents synchronized among multiple machines, or allow a shared Firefox profile…

Wait a second, that’s not so bad after all.

We are getting new computers at the office, and I got to be a guinea pig for the new setup today (spending the entire day ghosting, copying, and redownloading). The part that was simplest for me to get set up again was the set of open-source utilities I use on a daily basis — UnxUtils, NetPBM, and Subversion. This was because earlier in the week, I set them up on a shared drive and made a .zshrc that needs to be copied over to set up the paths. That’s the only manual step, copying one file. Everything else just works, which is a rarity on Windows. I’ll be putting this on the server early next week to make synchronized updates a bit easier across the board for those of us that have Windows as the development platform prescribed for us.

About two weeks ago, I switched my home machine (a dual Athlon) over to Gentoo full time (the Windows drive isn’t even plugged in anymore) and haven’t looked back. I set it up with a subversion server because I could once again do so on a machine that would stay on 100% of the time. The previous server was a P200 that kept getting its power cord tripped on by cats — we have some very rambunctious cats — and therefore was down to the point that I didn’t rely on it and no longer kept any of my data on it.

The new server is available at svn://svn.timhatch.com/lab for the contents of my lab directory where I test out new stuff. That includes itms_parser, feeds/dfw_frys_ad, and a bunch of functions/classes that I end up reusing a lot, under lib.

For example, if you want to grab the source for the DFW Fry’s Ad parser, run svn co svn://svn.timhatch.com/lab/feeds/dfw_frys_ad/trunk dfw_frys_ad and make sure you take a look at the config files before trying it out in your local copy of Apache+PHP.

Star Wars Episode III 19 May, 2005

I just got back from Star Wars, Episode III. I went to the midnight (actually 12:02, gah, why can’t they have “the midnight showing” to make things simple?) showing, and the theater (the new Cinemark here in Denton) was nice enough to let us in at 9:30p to wait in the theater. We sat K-T-K-T to save seats for Kasey’s family, who were coming in from Sherman to see it with the rest of us (Kate and my younger brother, Ted).

I wonder, where on the Geek Chart are “Star Trek nerds who feel abandoned, and therefore watched the new Star Wars movie hoping it would have plot consistency (although it left a lot unsaid), consistent pacing (which it didn’t), and ended up mostly trying to spy a reflection of something neat in C3PO’s head”? I’m pretty sure that’s where I am. I was really disappointed by Enterprise and the direction Paramount was headed, and was hoping the “Final Episode” of Star Wars would be a redeeming sliver of Sci-Fi… apparently not. This bit of dialogue basically sums up my feelings (heard as we were leaving… CONTAINS A SPOILER):

.

.

.

.

Guy #1
“Wait a second, Jar Jar’s still alive?”
Guy #2
“I swear…”
Guy #1
“I can’t believe I waited in line hours and Jar Jar is still alive.”

Antigrain (AGG) under MinGW 11 May, 2005

Finally, I got it to work. It’s been entirely too long since I tried to compile /other/ people’s programs under a Windows environment. I’m not saying this is the right way to do it, but I wasn’t interested in doing it the right way, I just wanted it to work.

  • Use UnxUtils to get a workable shell (zsh, better than nothing) and install to D:/bin
  • Grab Dev-C++ to get MinGW (yeah, I know you can get it separately) and install to D:/Home/Dev-Cpp
  • Grab the Antigrain sources and extract to D:/Home/agg23
  • Grab some Win32 NetPBM binaries and extract to wherever you want (I used D:/netpbm4/bin)
  1. Start up your shell. This probably means doing a Start→Run: d:\bin\sh.exe.
  2. Add the necessary paths to your shell. This probably means export PATH="D:/Home/Dev-Cpp/bin;d:/bin;d:/netpbm4/bin;$PATH"
  3. Rename one of the makefiles, namely Makefile.in.MINGW32_NT-5.0 to Makefile.in.WindowsNT. Don’t ask. Run make. If all goes well, you have no errors. You can hope :)
  4. Copy agg23/src/libagg.a to Dev-Cpp/lib/ and copy everything from agg23/include to Dev-Cpp/include/ because we’re lazy.
  5. Now we’re going to try building a sample. This was the hardest part for me, because I couldn’t find the right switches at first, and had to take a second crack at it. I picked agg23/tutorial/t03_spectrum so cd there.
  6. Run g++ -o test t03_spectrum.cpp -lstdc++, which shouldn’t get chatty. If it gives the error below, then it can’t find libstdc++, which is what I fought with for the longest time.

Figure 1: Bad output when you forget to link with libstdc++

Machine# g++ -o test t01_rendering_buffer.cpp
d:\DOCUME~1\HatchT\LOCALS~1\Temp/ccqQbaaa.o(.text+0x1de):t01_rendering_buffer.cpp: undefined reference to `__gxx_personality_sj0'
d:\DOCUME~1\HatchT\LOCALS~1\Temp/ccqQbaaa.o(.text+0x21e):t01_rendering_buffer.cpp: undefined reference to `operator new[](unsigned int)'
d:\DOCUME~1\HatchT\LOCALS~1\Temp/ccqQbaaa.o(.text+0x317):t01_rendering_buffer.cpp: undefined reference to `operator delete[](void*)'
d:\DOCUME~1\HatchT\LOCALS~1\Temp/ccqQbaaa.o(.text$_ZN3agg13row_ptr_cacheIhED1Ev[agg::row_ptr_cache<unsigned char>::~row_ptr_cache()]+0x19):t01_rendering_buffer.cpp: undefined reference to `operator delete[](void*)'
d:\DOCUME~1\HatchT\LOCALS~1\Temp/ccqQbaaa.o(.text$_ZN3agg13row_ptr_cacheIhE6attachEPhjji[agg::row_ptr_cache<unsigned char>::attach(unsigned char*, unsigned int, unsigned int, int)]+0x48):t01_rendering_buffer.cpp: undefined reference to `operator delete[](void*)'
d:\DOCUME~1\HatchT\LOCALS~1\Temp/ccqQbaaa.o(.text$_ZN3agg13row_ptr_cacheIhE6attachEPhjji[agg::row_ptr_cache<unsigned char>::attach(unsigned char*, unsigned int, unsigned int, int)]+0x5f):t01_rendering_buffer.cpp: undefined reference to `operator new[](unsigned int)'
collect2: ld returned 1 exit status

Here’s the lovely test image produced by t03 to prove that it actually does work on my system.

Disclaimer: this seems to be ignoring the src/platform subdirectory along with src/ctrl so the examples don't want to build, but the tutorials work okay, and it may work for what I need (antialiased roundrect rendering to a ppm).

Links 2005-05-09 09 May, 2005

Kate
«zzzz»
Tim
“That sounded like French.” points finger
Kate
“No, that was more of a snore. French and English and even German people all snore in the same language.”

Continuing my links from earlier,

Data Comes Last 09 May, 2005

In so many of our discussions at COBA, we get around to the question, “How long will it take to finish site x?”

The answer, in most cases, revolves around

  1. Finishing Design
  2. Finalizing Images in Design
  3. Inserting real content

Why does content come last? This is something that we are all guilty of, every single one of us. While our websites are being designed to be user-centered, our development of them is not. By forcing the IA to be set in stone early on, we pigeonhole ourselves into having “filler” pages, even if the content doesn’t (in the end) warrant an entire page.

But what if the IA comes from the data that actually matters to users? Say we generate a page for finding information about degree programs. Should we have the degree programs listed, and build up from there, or come up with a design to plug in the content to, although we don’t know what the content consists of, exactly?

In this case, I’d say it’s a good idea to make a rough draft of the design in a day or two, get the content finalized, and then migrate the design to something better-tailored to the data as it’s now understood. This is what we did with the still-in-progress redesign of the Murphy Center site, and it’s coming out great – both semantic code and a simple stylesheet.

Clarification on “Free Time” 09 May, 2005

To clarify the previous post on the project codenamed “Bunt” (Better-UNT), I don’t feel any animosity toward COBA about “buying out” something I’d come up with on my own time. My comment mentioning “COBA wanted to acquire the project I did on my free time” was not anything against them, in fact the opposite.

This is somewhat tangentially related to Google’s 20% rule (software engineers can[/are expected to] spend 20% of their work time on a special project of their own choosing). I feel a little bad quoting this giant bit, but it nails down why this is a good idea from a corporate perspective, and how encouraging employees to be nerdy both improves their skills for work-related items, and may also give the employer a new product.

My suspicion is that 20% pet projects are a cheap way for Google to do new product development and prototyping in a light weight manner as opposed to being a way to encourage people to spend time working on their hobbies. This is a ‘code first’ approach to launching new products. Developers actually write cool apps and then someone decides whether there is a market or business model for it or not. This is in contrast to the ‘VP first’ or ‘talk first’ approach of new product development at places like Microsoft where new projects need to be pitched to a wide array of middle and upper management folks to get the green light unless the person creating the project is in upper management. At the end of the day with either approach one has to pitch the product to some management person to actually ship it but at least with Google’s “code first” approach there already is a working app not just a dollar and a dream.

Some Thoughts and Questions About Google 20% Time

The work environment at COBA is becoming increasingly like that of the open-source-welcoming companies nerds so adore — not for the relaxed dress code (believe me, any student worker gets a funny look if they’re wearing a polo shirt — “what’s the occasion?”), not the juicers (we wish we had one), but instead for the open exchange of ideas and the embracing of new technologies. The only student worker position I know of on campus that gets more freedom at work than we do has a DDR setup in his cube. That’s how open we are.

Experiments in Interaction 09 May, 2005

I should really quit mentioning neat projects to Cameron. They always seem to turn into a race to see who can get the code finished faster — and a PHP (me) vs Perl (him) race. The aim of the current experiment is to make a database of what holdings the library here has, and specifically, what CDs they have, sorted by price (figuring the more expensive ones might be more interesting). Since the library makes it very difficult to extract such information (though not on purpose), I decided to make a mini-REST interface to it.

The UNT system, which appears to be a rebranded version of a system used elsewhere, has a number of deficiencies that make it difficult for a computer to handle. The first is that the data is not easily machine-readable, given that it’s dumbed down for humans. The second is an idiosyncrasy in search, and the third is the lack of clear status codes.

Dumbed-down data

Take this page, for example, LPCD 83207. There are three ISSN/ISBN numbers listed — only one is interesting to us (024 1 under the MARC display). This is the UPC number for this disc and is a unique key I can actually use to tell whether something I find on Amazon is the same or not (I must express my distaste for the fact that Amazon doesn’t publicize the UPC either, though you can search for it).

So, we’ve established that the easiest way to grab the UPC number from WebOPAC is to view the MARC info. This is great, except for the fact that it doesn’t include the current status of the item (checked out, due, lost, etc). So I’ve got to request two URLs… one of which may change data (status), and another which can be cached indefinitely (the MARC data).
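That two-request split suggests an obvious caching strategy. Here’s a rough sketch of the idea in Python — `fetch` is a hypothetical stand-in for the two HTTP requests, not a real API:

```python
_marc_cache = {}  # call number -> MARC record (immutable, so cache forever)

def get_item(call_number, fetch):
    """Hypothetical sketch: fetch(call_number, kind) performs the HTTP
    request against WebOPAC. MARC data is cached indefinitely; the
    status is volatile, so it is re-fetched on every call."""
    if call_number not in _marc_cache:
        _marc_cache[call_number] = fetch(call_number, "marc")
    status = fetch(call_number, "status")  # may change; never cached
    return _marc_cache[call_number], status
```

So repeated lookups of the same disc cost one request instead of two, which matters when you’re crawling an entire catalog.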

Search… or not?

I’m looking for an entry with the local call number LPCD 83228, which is the second disc of a 2-disc set. This search should give me a search page centered on it, but instead contains a big red “No matches found” message. If I try to browse around it by url hacking, I try this url but it totally skips over it till it finds the beginning of the next record. As far as an automated parser can tell, unless I build in a lot of extra checks, this call number does not exist.

Status Codes

A search which returns no results doesn’t have any easily read code, just a warning to a human that says “check your data.” If there’s a system error, for example passing the code b for search type as in http://iii.library.unt.edu/search/blah if I forgot that I need m as an explicit search type if I’m looking for the local call number blah, I get a 0-byte response. Yeah, that’s very graceful, closing a TCP connection in response to an unknown search method. The system is both too chatty and not chatty enough, at the same time. I’m afraid the spaghetti to decipher what each response means is going to outweigh the amount of effort actually put into reading the data which I came to get.

In other words, I’m glad it’s an open system I’m developing, so I don’t have to redo this again later.

Result

Here’s what I’ve got after about 20 minutes of fiddling:

GET /lab/library/item/lpcd+83207 HTTP/1.0

<?xml version="1.0" encoding="utf-8"?>
<item id="LPCD 83207">
  <location>WILLIS 4FL MUSIC AUDIO</location>
  <status due="05-15-05">DUE</status>
</item>

It’s a decent start. I need to adjust the dialect to have an explicit <response> tag or <status error="true" recoverable="true" /> so the caller will know whether trying again is worth it, implement caching, etc.
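Something along these lines, perhaps — the tag and attribute names here are just a strawman, not a settled dialect:

```xml
<?xml version="1.0" encoding="utf-8"?>
<response>
  <status error="true" recoverable="true" />
  <message>WebOPAC returned a 0-byte reply; worth retrying later</message>
</response>
```

That way a caller can distinguish “the item doesn’t exist” from “the backend hiccuped, try again” without scraping human-readable warnings.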

On the Web, The Any-Eyed User is King 06 May, 2005

I keep alternating back and forth between writing long-winded posts about really technical subjects, and links to more of the same. Well, today it’s both!

The User is King

We’ve been on a usability kick at COBA (where I work) ever since I came back speaking its praises after taking a class with Dr. Steiner last year. Remember that the user is king no matter what, and in most cases you want to make things easy on him/her — to do things in the manner they naturally expect, i.e. affordances.

For example, when the user wants to add a class to their schedule, chances are they know something about it, they’re not stupid… but they probably don’t have the primary key in your database memorized. Therefore, give them a box either with a nice simple title suggesting what should go in it, or blank slate information that is recognizable. The current EIS implementation doesn’t even have any alt text on the “search for courses in this unnamed field” image-submit button.

The user should always be in control. There’s a good article on how to respect your users, and most of what it says is pretty obvious if you’ve worked with UCD (user-centered design) before at all. Basically, while the user doesn’t care about how you do stuff on the backend, and you shouldn’t force him to know, you need to be respectful that he’s not an idiot either and, like the first rule of retail says, “the customer is always right.” If they can’t decipher the interface, it’s your fault as the programmer, not the user’s.

Programmers are Users Too Though

Yeah, while the person who designed the system may know exactly how to use it, there are a lot more non-designers that need to use it every day. Take the UNT Library site as an example.

I want a no-nonsense search box. I don’t want to pick “Keyword” or know that “Title” isn’t full-text (it only matches from the beginning of the title). I want to put in something to narrow down the search only if I have to, because web users are inherently lazy and want things that just work. I have had it in my noggin to redesign the Lib site and put an XMLRPC interface on it so people can extend my work further. Good systems are open and extensible. OPAC is not.

Links 2005-05-06 06 May, 2005

  • Fiddler — .NET-based proxy that allows you to fiddle with HTTP requests (viewing is a bit more natural than with Ethereal)
  • The Party Party — remixes of things people said (don’t play it on the speakers at the office)
  • Derelict Japanese buildings — oddly mesmerizing, they have a unique flair
  • I really need to fix these bullets, they don’t stand out at all

Preliminary UNT Search Info 06 May, 2005

There’s a project I’ve been working on for the past six months, off and on. I took a class in software development (with php) in Fall 2004 in which we were tasked to develop a way to automagically generate course schedules for UNT students given some basic preferences. Most of ours ended up better than what my.unt currently has in place, mostly because it doesn’t include stupid single-use sessionids. (Aren’t sessionids supposed to be good for, you know, an entire session?)

Set my codes free

I didn’t release this project publicly for fear that it would be ripped off by other students taking the course in subsequent semesters — my coding style is quite unique since I’ve been doing PHP for a while after coding Perl. It had a pretty interesting backend system to handle actions in a decentralized manner though, so I want to release it at some point, once I know they’ve changed the assignment.

A few months later, I fiddled around and started trying to find a way to filter tables in realtime. It turns out the DOM is not nearly as well-suited to what I wanted to do as I expected, and operations on thousands of table rows take a while (esp. when each one causes a reflow of columns!). I discovered a couple of JavaScript-related bugs in Safari’s rendering engine (and a couple of ugly ones in IE5/Mac, but those don’t matter now that Apple ceased bundling it with OS-X). I was trying to use <tbody> sections to separate out departments so you could show/hide a department, making it only reflow tens of classes rather than hundreds if you were trying to filter rows in realtime.

Why resurrect it now?

It went on the backburner (not really because of the bugs, but because the semester actually required time to complete assignments), and I picked up some of the code again, drawing on several recent projects for ideas, including Google Maps, S5, LiveSearch, and Google Suggest. Yes, I know that many of those were out in some form in January, but I hadn’t delved into the code to see how simple it was to use XMLHttpRequest. The reason I picked up the code was that my final math test this semester was at 11am, and I was getting stressed thinking about it and needed to clear my head, so I started with the insane idea of incrementally loading a table via XMLHttpRequest and it went from there. COBA wanted to acquire the project I did on my free time (one of the coolest things I’ve ever been told after trying to distract myself before a test), and we’ll be releasing several pieces of it as BSD-licensed code in case others want to draw upon the same techniques we’re using. (I want to know what the nice-guy version of Embrace and Extend is called, other than “The UNIX Way”.)

I’d suggest trying out what I’ve got finished so far before I delve into the history of it all — dept:csci lycm for an example of a query that’ll work nicely. Fiddle with the text box, and see if you can decipher how to use it without a manual. (That’s the objective!) The shortcuts are prof:, dept:, and bldg: if you need a hint. Anything else is a freeform search.

The Code

I love coding in Javascript. In many ways, it isn’t a “real” language, and the specs are a bit dodgy (especially when it comes to implementations of regular expressions, and \r\n in <textarea>s). But I like it, it has a certain zen-like quality. The Firefox Javascript Console and LiveHTTPHeaders are invaluable when debugging stupid errors like mixing up escape and encode when loading a URL.

I spent the week-past Friday reworking some really ugly code that dynamically generated a table, then spent this week implementing it with the server-side code to actually query a static version of the course database. All of this is happening on NetWare running PHP 4.2.4-dev, so to be honest I’m surprised we haven’t found more bugs than we have.

It’s now in a functional state, and although the Javascript-off degradation is not in place yet, it will be pretty easy to implement (a lot easier than going the other way — PHP that suddenly has to write Javascript RPC calls to return to the browser). Subconsciously, we have separated content from its presentation from its styling, and can easily take the data returned by the pseudo-RPC call and turn it into a PHP function that echoes HTML instead of Javascript that uses DOM methods.

It basically boils down to an onkeyup event attached to a textbox that fires off an XMLHTTPRequest to grab search.php?q=<search-string>, which does some basic logic on the query. That limiting is what I got to do today, so I’m posting it first:

/** @Input:  $_REQUEST['q'] which is something like "prof:a blah" to search
             for 'blah' in all fields, and 'a' under the 'prof' field.
    @Return: string with the WHERE part of an SQL query
  */
function build_query() {
    $spec_keys = array(
        "prof" => "Name",
        "bldg" => "FacilID",
        "dept" => "Subject",
    );

    // So the regexp results don't need to be trimmed individually
    $q = trim($_REQUEST['q']);
    global $term; //the semester code
    $terms = array_unique(preg_split("/\s+/", $q));

    // Items-which-will-be-anded,
    // not <a href="http://dict.leo.org/se?search=anders">Anders[de]</a>
    $anders = array();
    $anders[] = "(Term = {$term})";
    foreach($terms as $t) {
        if(strpos($t, ":") !== false) {
            //It's a special two-part limit
            list($k,$v) = preg_split("/:/", $t);
            if(!array_key_exists($k, $spec_keys)) {
                // Don't do this because we're returning a Javascript call
                //echo "Bad key\n";
            } else {
                // Add something to the list of stuff to limit
                $anders[] = sprintf(
                        "(%s LIKE \"%%%s%%\")",
                        $spec_keys[$k],
                        addslashes($v)
                        );
            }
        } else {
            $anders[] = anythinglike($t);
        }
    }
    $basequery = "WHERE ".join(" AND ", $anders);
    return $basequery;
}

/** @Input:  Term to search for (raw, will be escaped)
    @Return: A piece of a WHERE query
  */
function anythinglike($q) {
    // These are the important fields we're searching thus far.
    $nq = addslashes($q);
    return "((CourseDescr LIKE \"%{$nq}%\") OR (Name LIKE \"%{$nq}%\") OR
    (Subject LIKE \"%{$nq}%\") OR (Catalog LIKE \"%{$nq}%\") OR
    (FacilID LIKE \"%{$nq}%\"))";
}

This can be adapted for any fields you wish. It was originally concocted for a log parser I was trying to write a couple of months ago to accept Google-like syntax for search strings (in many ways, a universal command line), so that, for example, referer:yahoo transfer:1MB+ * /bycountry would generate a pretty sweet-looking graph of people who view a lot of stuff on my site after clicking a Yahoo link. The context of what you’re looking for can usually be figured out from the search terms, which is pretty easy when everything is dumped in SQL anyway (as logs would be in my case, or course info in COBA’s).
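The tokenizing part of that idea is portable to any language. Here’s a rough Python equivalent of the prefix handling (field names taken from the PHP above; this is a sketch, not the production code):

```python
# Map of query prefixes to database column names, mirroring $spec_keys
FIELDS = {"prof": "Name", "bldg": "FacilID", "dept": "Subject"}

def split_terms(q):
    """Split a Google-style query into (column, value) pairs.
    Terms without a known prefix search all fields (column=None);
    unknown prefixes are silently dropped, as in the PHP version."""
    pairs = []
    for term in q.split():
        field, sep, value = term.partition(":")
        if sep:
            if field in FIELDS:
                pairs.append((FIELDS[field], value))
            # else: bad key, ignore it
        else:
            pairs.append((None, term))
    return pairs
```

From there, building the WHERE clause is just a join over the pairs, exactly as build_query does.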

Useful Resources

So, you’ve tried the demo, you’ve skimmed the rant about how happy I was coding this thing. Now you want to try it out yourself (and learn Javascript the right way) — where to start?

I also found the following helpful in jogging my memory, and good reads overall (otherwise I wouldn’t link ‘em):

Initial Tiger thoughts 03 May, 2005

As of right now, I’ve had Tiger installed for 24 hours, and I’m loving it. It’s about 99 steps forward and one step back — that one step being Mail.app. I could probably fill out a whole entry with all the things I don’t like from my few short hours thus far with Mail.app. Anyway, before installing, I copied my ~/Library over to an external drive (and all the important stuff like ~/Photos and /Music too) and did a complete erase and reinstall. The install process itself took about an hour, but then I probably spent another hour or two setting up XCode, iLife, the iWork trial (till the full version shows up tomorrow), etc. I am taken with Pages, having used it to type a paper this morning, and Keynote, having used it for a presentation tonight. Neither had any danger of crashing, and didn’t harass me about being a trial other than informing me of the number of days left upon each launch.

Spotlight

The “Spotlight” moniker is really pervasive, and I keep finding more places they managed to sneak it in. It’s not just in the UI — the brighter reflections, iChat AV with multiple contacts and its reflective surfaces… I just found it in Chess if you click a piece but don’t drag it!

Speed

The UI fixes all the complaints I ever had about OS 10.3 feeling “sluggish” compared to Windows — and it’s not just the Apple apps. I notice that Firefox feels like it’s loading about twice as fast as before (maybe it’s just bouncing quicker in the Dock, which quells my fears of the spinning beachball of death). I did get the WBOD

Antialiasing

Some people call me a freak for noticing typography the way I do, but when ClearType is going overboard, it really irks me. I had the installer CD running for not even 5 minutes before it bothered me to the point I couldn’t look at it. It took Cameron about an hour before he noticed the same thing on his already-installed Tiger; apparently it has some sort of über-strong ClearType-style AA in effect by default, to the point things look crystallized with grainy colors.

Misc Other Things

  • It’s finally possible to disable the Capslock key (without a hack)!! Open the Keyboard Prefpane and go to “Modifier Keys.” I’d like to register my vote against the new “Show All” style while I’m at it, I liked my “shortcut bar” sitting on top.
  • Fink doesn’t like being installed from a directory with spaces. The webpage didn’t prohibit this, so it’s good to know. Here’s the part that specifically failed:
    gzip -dc /sw/src/tar-1.14.tar.gz | /usr/bin/gnutar -xf - 
    cp -f /Users/tsh0019/Desktop/Downloads/Apps/Unixy Stuff/fink-0.23.9/update/config.guess .
  • I finally picked a name for my laptop, after it has been `Tim-Hatchs-Laptop` for the last 14 months. I’m calling it `Prof-Farnsworth`. My username is also my UNT Euid so I can log into the CSP machines (a poor man’s vpn with `ssh -L`) easier. I need to use `ssh-keygen` again, because my key has changed (hmm, wish I’d thought of looking up a way to save the old keys before I’d installed on top).
  • Cmd-I on a multiple selection in Finder now opens multiple info windows rather than a single sum-of-selection window. Just an observation of something that’s different, I’m not sure whether it’s better or worse.
  • Screencaps are now PNG by default. Cool. It was an unnecessary pain converting PDFs to tiffs before (and I was too lazy to set up a Folder Action).
  • I don’t know if it was this way before, but it’s really easy to trust an untrusted cert in Safari.
  • There’s something funky with Growl’s voice-selection box under Tiger. I don’t know if it had trouble before.
  • Grapher is nifty, and supports the basic TeX-style `^` for superscript and `_` for subscript, so it’s really handy. I wonder if they acquired this rather than writing it in-house, though, because the error messages are, hmm, less-than-helpful. Also, trying to render a 2-D graph kind of takes forever.
  • I’m playing with Adium in place of Fire, after Fire started giving me trouble connecting to ICQ under 10.3. I wasn’t really attached to it, but it did what I wanted. Well, I’m not really attached to Adium yet either (especially the way logon/logoff events show up), but it’s worth a try if you haven’t tried it. I have a love/hate relationship with the duck icon and the “quack” noise when an incoming message arrives. Still trying to figure out how to use Growl with it.
  • I noticed that Sidetrack is Tiger-compatible, and installed it for the first time as soon as I was up and running (and made sure that the two-finger drag wasn’t working). This has got to be the coolest app I’ve used since Romeo as far as “wow-factor.” Speaking of which…
  • Romeo keeps disconnecting from my phone under Tiger when I’m trying to remote-control programs. I had no issues under Panther, so I’m assuming it’s an issue related to the new version (Cameron also had issues but he’s got a Symbian phone).
  • I have used Dashboard a fair bit (calculator, mostly), but don’t know how useful I’ll find it over the long run. I don’t like that the dictionary widget isn’t scroll-wheelable. I wonder if I should file a bug about this… but can’t, because Dashboard doesn’t have a menu item for it. The prefpane for selecting shortcuts doesn’t warn you about assigning the same shortcut to more than one item, which seems like an oversight to be fixed in a point release.
  • I love Pages. I typed up a six-page paper this morning without saving once, because it’s a Mac and I figured it wouldn’t crash (I was transcribing from an already-printed paper that I’d forgotten to back up before the new install). It fulfilled my expectations, and then some. As I dragged an image around and resized it, the text reflowed around it in realtime. This is something I’ve never seen another application do, and best of all, I haven’t had any images “fall off” to the first page the way Word on Windows quite often does. I want to find some more templates for it, and figure out how to make my own, as it seems like a cool system.
  • I had never used Keynote before, and managed to teach myself how to use it in about 30 minutes to make a presentation for DCLA tonight. Of course, putting content into the presentation and sprucing it up took about another hour, but that’s the same with PowerPoint too (I had some embedded screenshots, etc.). It’s PNG-compatible, which is more than I can say for PowerPoint. It has automatic guides for centering and aligning elements, just like Interface Builder does. The transition effects blow PowerPoint out of the water (my favorite is Motion Dissolve), and I don’t feel one bit awkward creating a slideshow from scratch here, because it makes it easy. With PowerPoint, I had to use an existing template and then tweak a bunch to do what I wanted.
  • Tiger comes with PHP 4.3.10, which was another nifty thing. Uncommenting two lines (via `sudo emacs /etc/httpd.conf`) was all it took to get it working with the built-in Apache. After the clean install, I was able to set up a PHP script to combine a bunch of rendered XML files into one final HTML file in about 20 minutes between classes today, grabbing the files via FTP over my Bluetooth phone. Ha, I can’t even imagine that working so nicely under Windows without installing lots of extra stuff.
  • Spotlight is a decent replacement for QuickSilver (which I have yet to install…) although it seems to re-scan my HFS+ external drive every time I plug it in via Firewire. The complaints other people have about items hopping around are alleviated if you use the keyboard to navigate around them.
  • iTunes is a lot more consistently stable without the ogg plugin. Oh well. I started my collection on my laptop from scratch, adding album art for anything I rip onto it. That’s fun when I have a single in my hands that Amazon doesn’t.
  • `ssh` seems to take an overly long time to connect to anything. It happened both on campus and at home, so I think it’s independent of the connection. Maybe it’s just a glitch.
  • They seem to have fixed one of my complaints, namely that reopening a directory didn’t refresh its contents (in this instance, on a Samba share).
  • Note to self: need to find out if the iDVD hack still works to make it write to a folder when you don’t have a SuperDrive… my 16x NEC on my PC is a lot faster to make discs.
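For the curious, the two PHP lines I mean live in Tiger’s stock `httpd.conf` and ship commented out; if memory of the stock file serves (paths may vary on your install), they look like this:

```
#LoadModule php4_module        libexec/httpd/libphp4.so
#AddModule mod_php4.c
```

Strip the leading `#` from each, restart Apache (`sudo apachectl graceful`, or toggle Personal Web Sharing in the Sharing prefpane), and `.php` files start being interpreted.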
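That fink failure, by the way, is a classic shell quoting problem: the generated `cp` command doesn’t quote the source path, so the shell splits “Unixy Stuff” at the space and `cp` sees two source arguments. A minimal sketch of the difference (the `/tmp` paths here are made up for illustration):

```shell
#!/bin/sh
# Recreate a source path containing a space, like "Unixy Stuff".
mkdir -p "/tmp/Unixy Stuff"
echo "dummy config.guess" > "/tmp/Unixy Stuff/config.guess"
src="/tmp/Unixy Stuff/config.guess"

# Unquoted: $src is split into "/tmp/Unixy" and "Stuff/config.guess",
# so cp gets two sources and a non-directory target, and fails.
cp -f $src /tmp/dest.guess 2>/dev/null || echo "unquoted: failed"

# Quoted: the path stays one argument and the copy succeeds.
cp -f "$src" /tmp/dest.guess && echo "quoted: ok"
```

Quoting the expansion (or just installing from a space-free directory) sidesteps the whole thing.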

So, how’d I fare? 43.6GiB available on a “60GB” drive, which is about 12GB used for a baseline Tiger + XCode + iLife + iWork demo + fink + basic apps. None of the basic apps are big; they’re things like skEdit (which works just fine). I’ve now put 1.5GiB of music on it, and I think I’ll be able to control myself a bit better this time; on Panther I got used to clicking “Yes, I know my startup disk is full, leave me alone.”