Alex’s point bears thinking about. LibraryThing is an online service that makes it possible to get your data back out, in a variety of ways—RSS and blog badges and mobile access, of course, but also plain ol’ tab-delimited or CSV export. And that’s pretty cool.
In the meantime, the rest of my books have finished importing (guess they were pretty backed up!), so I’m off to play with it a little.
If I hadn't been spending all of my free time on the new site, I would have told you about these new releases: 50 Cent's Massacre, 50 Ft. Wave's Golden Ocean, Ash's Meltdown, Boom Bip's Blue Eyed In The Red Room, Decibully's Sing Out America, Kasabian's debut (read Leslie's review), The Kills' No Wow, Paint It Black's Paradise and Sam Prekop's Who's Your New Professor. The last one was my personal pick of the week - Paul's got a review on 75 or Less.
I forgot to mention it last week, but The Rutles 2 came out on DVD. We're about to wrap up the contest - get yourself signed up.
Here's a true story about Ash's old record label, Kinetic. They once begged me for months on end to run a contest. I'm serious - they sent me a weekly email like "We love your site and we would do anything to set up a promo with you." They eventually came up with a contest that was really cool. The prize was great - limited edition, signed - everything that makes a nice prize. They told me they would send me the prize after the contest was over. They, of course, never did. Wouldn't respond to my emails, wouldn't acknowledge that I was alive. Very classy move. So I became bitter and vowed never to trust anyone in the music industry (well, except for the good guys - you know who you are) ever again and started an art website. The end. read more:
Fueled by the discovery that a Starbucks in Washington has "a bicycle powered blender and the customers make their own drinks," Make Blog's Phil Torrone went bike-blender crazy, pulling together a great post on making and buying bike-powered blenders.
C&L's Late Nite Music Club with Norah Jones Legendary music producer Arif Mardin died Sunday night. This was a very classy man, much loved by all the artists he ever worked with. And the artists he worked with were as diverse an array as anyone has ever tackled. For decades he was a star in-house producer with Atlantic Records, making records with Willie Nelson, Aretha Franklin, the Bee Gees, Average White Band, Barbra Streisand, The Rascals, Phil Collins, John Prine, Hall and Oates, George Benson and dozens more…
He retired from Atlantic in 2001 and a few months later he went into the studio with an unknown young woman for his pals at Blue Note Records. The result was the Grammy-winning, multi-platinum debut by Norah Jones, COME AWAY WITH ME. And our song tonight is my favorite from that album, a cover of Hank Williams' classic "Cold Cold Heart."
In honor of Arif's passing I have 2 brand new box sets called WILLIE NELSON: THE COMPLETE ATLANTIC SESSIONS. If you wanna play this game, be sure to include an e-mail address. Just send us your top 10 songs (could be "my top 10 songs of all time," "my top 10 songs by women singers," "my top 10 songs with a political message," "my top ten songs with an alto sax," "my top 10 songs from the '80s"… anything). Tomorrow morning I'm going to look 'em all over and pick one and John's gonna look 'em all over and he'll pick one too. And the two winners are going to each get a box of Willie.
(guest blogged by Howie Klein)
I missed the death of Arif Mardin; it didn't hit the mainstream news. His is a name I remember seeing on so many albums as I grew up. All those wonderful soul albums. I will always associate him with the sound of Aretha Franklin.
Early this week, in an email to a coworker, I mentioned that I made music and pointed her to my site. On Thursday she wrote back and said “cool, you’re even on iTunes!” This surprised me; my two albums were submitted about 3 and 8 weeks ago and hadn’t shown up on iTunes as of Monday or so. But I looked, and indeed, there they both are on iTunes. For those of you who’ve heard the music, I’d appreciate a customer review. For those of you who haven’t, what are you waiting for? :-) Of course there are also old-fashioned shiny discs in plastic cases. Thanks!
Is Java still "cool"? I've got a short piece featured over on java.net today; it's actually a response to a thread started by Brett McLaughlin and Kathy Sierra. read more:
Skills for Access If this site isn't a testament to beautiful design and to advocating, demonstrating and teaching accessibility, then I don't know what is. It also covers multimedia accessibility: Flash, Shockwave and external viewers. Great resource, thanks RJ. read more:
The CSS Box Model Hierarchy For developers new to CSS and the box model, this is an excellent 3D visual aid. I also highly recommend following the link to Douglas Livingstone's interactive Flash demonstration version. read more:
great food conversions site I just found this site, which helps convert food measurements, and thought I'd do some sharing. It offers a great variety of cooking conversions. I was particularly impressed with the accurate ingredient / gram calculator - this one allows you to put in the food and your measurement and gives you the grams… very cool for those 1 tsp measurements of [...] read more:
New and Cool Wallpapers! In a departure from creating my own work from scratch, I decided to download some hi-res NASA images and create wallpaper from them. 10 wallpapers have been added to my new Mars gallery. Each of these is given proper credit as to its origin. This is just the beginning - I plan on offering many more planet and space related wallpapers in the coming months along with my usual photos. read more:
How to get high Google rankings with Flash sites Flash movies are a great way to add multimedia elements to a web site. Unfortunately, Flash cannot be indexed by most search engines. For that reason, it is very difficult to get high search engine rankings for Flash sites. This article explains how to get top rankings on Google with Flash sites. read more:
How to rank well with Flash movies Flash movies are a popular way to make websites more compelling. They are useful if you want to impress your website visitors or if you offer web design services. Unfortunately, if you use Flash movies, or if you even design your complete website based on the Flash technology, your odds of getting listed in the search engines are greatly reduced. Read this article to find out how to rank well with Flash movies. read more:
Solving big business problems in our little toolbox application. A use case for Project Distributor.
Project Distributor: Introduction to our distributed web service model So Darren and I have put in about a month now on the Project Distributor website. We are starting to reach that critical point where the site is pretty cool, we have plenty of users, we are thinking about running out of the allowable bandwidth for the demo site, and all sorts of other things that tend to happen all at once. Now, there are some problems you can design yourself out of, and others that you really have to throw some money at. Our latest enhancements can be summed up in a short list.
Buy a domain name and start hosting in two places. ProjectDistributor.com should be up fairly soon to accompany MarkItUp.ASPXConnection.com
Have people host their own versions of the application. That means a big source release is in the future. At this juncture we risk fragmentation.
Design away fragmentation with a series of ingenious features that will make everyone want to use the application at hand.
I'm here to talk about the last two, since Darren already bought some additional hosting for us. The concept will be to release a fairly stable version of the application so that groups can host tools, code snippets and other source/binary releases for their teams to share. The application is very lightweight and easy to set up, so it won't require a bunch of hand holding and configuration to get up and running initially. From our standpoint we solve a number of issues at this juncture. The most obvious one is what we classify as the Lutz Roeder use case. .NET Reflector is the key type of application we'd love to get hosted, because it makes tools a bit easier to find. Not that Google does a bad job; we'd just like to get a bunch of tools in one place, with some features for feedback, new releases, and some cool client tools for publishing.
Now, Lutz would put his application up and he'd whack our bandwidth. He is the prime example of someone who should be hosting his own tools, but possibly using our interface. He doesn't have to (we haven't even asked him yet, in fact), but if he decides to do so, then all the better for the web application moving forward. Users such as Lutz probably want a certain level of control over their own sites as well in terms of branding and controlling access. That will only come from hosting the application yourself (and maybe some other features we'll see later).
From a security standpoint many teams will also want to host their own servers. In this manner they get control over the hardware their sources and binaries are stored on. They can accept tools up to any maximum (instead of our imposed limits) and provide unlimited download bandwidth if they choose. Or they can take advantage of our gating mechanisms to make sure their server doesn't get overloaded with downloads and open their tools up to the public.
The only major problem with this source release is that the initial problem we were trying to solve, promoting the visibility of tools, starts to erode. You see, the more sites that host their own tools, the harder it is to find the right site with the right tools. We are trying to solve this in a number of ways. The first is allowing users of a site to store bookmarks to other projects and external resources. This is only a temporary fix, because it still doesn't provide the mass search and categorization infrastructure required to truly promote the visibility of the tools being hosted. We have to come up with a solution that brings all of the sites together, but we don't want to create just another portal or gateway site. That is boring. Now you have the background, so how will we solve the fragmentation issue?
Designing away Fragmentation I won't lie to you: I've implemented this model several times, but I've never had a project that was capable of really showing off the feature set we are about to talk about. The concept is to unify all of the sites by allowing them to easily manage views of data from all of the sites combined. Each site owns its own content and maintains its own users, but in turn peers with other sites to obtain additional content.
Web services provide a dual feature set in this model. At the current level they allow us to generate really great client-side tools for managing, well, your tools! We have a drop-client target right now so you can drag and drop new releases to existing projects in just a few seconds. Some new tools for working with build systems to promote the source code up to the server are in the works. We natively integrate with your RSS reader and will have our own alert services in the drop client just in case you don't have one. There aren't any search or local caching features, but those are also planned for the drop client so you can background download new releases, just like Windows Update.
That doesn't solve fragmentation though; it just makes me realize how much work I have left to do. The second feature of web services lies in the ability of each site to aggregate data from the many other sites out there hosting the application. Remember, everything we make available at the service layer can also now be remoted. The more caching we put into the data layer, the more performant the entire process will be, and we can even tune the caching depending on whether the data layer is merging off-site contents or database contents.
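To make the aggregation idea concrete, here's a rough sketch of what that data-layer merge might look like. Everything here is my own guess for illustration: the /service/projects endpoint, the Project shape, and the cache policy are placeholders, not Project Distributor's actual API.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

// A project as a site's web service might expose it (hypothetical shape).
public record Project(string Name, string Group, Uri Home);

public class PeerAggregator
{
    private readonly HttpClient _http = new();
    private readonly TimeSpan _ttl;
    // Per-peer cache so remote outages or slowness don't stall page rendering.
    private readonly ConcurrentDictionary<Uri, (DateTime FetchedAt, List<Project> Projects)> _cache = new();

    public PeerAggregator(TimeSpan ttl) => _ttl = ttl;

    // Merge the local site's projects with whatever each peer exposes.
    public async Task<List<Project>> AggregateAsync(IEnumerable<Project> local, IEnumerable<Uri> peers)
    {
        var all = new List<Project>(local);
        foreach (var peer in peers)
            all.AddRange(await GetPeerProjectsAsync(peer));
        return all.OrderBy(p => p.Group).ThenBy(p => p.Name).ToList();
    }

    private async Task<List<Project>> GetPeerProjectsAsync(Uri peer)
    {
        if (_cache.TryGetValue(peer, out var entry) && DateTime.UtcNow - entry.FetchedAt < _ttl)
            return entry.Projects; // serve from cache: no remote round-trip

        try
        {
            // Assumes each peer exposes a JSON project listing at /service/projects.
            var json = await _http.GetStringAsync(new Uri(peer, "/service/projects"));
            var projects = JsonSerializer.Deserialize<List<Project>>(json) ?? new();
            _cache[peer] = (DateTime.UtcNow, projects);
            return projects;
        }
        catch (HttpRequestException)
        {
            // A dead peer shouldn't break the local site; fall back to stale data if any.
            return entry.Projects ?? new List<Project>();
        }
    }
}
```

The per-peer cache is the tuning knob mentioned above: how long you hold on to off-site content before asking the peer again is independent of how you cache your own database content.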
Peer Sites I'm sure there is another name out there somewhere, but for the past 2 years I've called these peer sites. Each instance of Project Distributor will have a number of options for adding peers whose content will be aggregated and added to the local collection while users traverse the site. The first step is to get the peer sites running in a read-only mode, and to set up some really great options so the entire process can be controlled. This covers a number of use cases for us, including the following.
Fragmentation can be mitigated through proper configuration. If everyone aggregates 5 or 6 sites into their peers, then we now have a huge network of interconnected peers, and users can pick and choose which one they use for purposes of searching the tool network.
Peer connections are unidirectional or bidirectional. Access is configurable. Teams can include tools from external sites while keeping their own tools completely private. They can exist behind a DMZ or a private network.
Users can host their own personal tool sites in the same manner as the team sites. They can even statically configure which projects to make available. In this way you can build a collection of personal tools that you love, and have the latest information automatically updated on your machine for your perusal.
Peer sites solve plenty of visibility issues, but that is pretty much all they solve for now. We still want to enable all of the features available to the client tools. After all, the web service methods and proxy infrastructure is in place to do so much more.
Master Sites Well, we want to solve another problem: where you edit your data. A master site is where the users, groups, projects, etc. are all hosted, but thankfully, you'll be able to log in through any site (assuming it is peered with your master site) and then edit your own projects and such. This is a remote principal context and is actually one of the cooler features associated with the peering functionality of Project Distributor. We'll be fully secure in our login and credentials handling, but unfortunately we'll still be transferring data in clear text in the short term. Maybe we'll fix that with enough push back.
Clone Sites A clone site is where we empower a site to act on behalf of a master site. For me, my local Project Distributor is currently cloned to the main Project Distributor site. What does this mean? Right now it means I get all of the data from PD, and that users who trust my site can log in to their Project Distributor accounts and cross-edit data. Pretty nice if you ask me. It basically means you can fully host a Project Distributor installation and never, ever have to install a database server. Users can just act on behalf of a remote server.
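Here's a rough sketch of that remote principal idea. The endpoint, the token header, and the payload shapes are all my own inventions for illustration; nothing about the real wire format has been published.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Hypothetical sketch of the "remote principal context": a clone site with no local
// user database authenticates the user against their master site and keeps the
// resulting token so later edits can be forwarded there.
public record RemotePrincipal(string User, Uri MasterSite, string Token);

public class PassThroughAuthenticator
{
    private readonly HttpClient _http = new();

    public async Task<RemotePrincipal?> LoginAsync(Uri masterSite, string user, string password)
    {
        // As the post admits, credentials still travel in the clear for now,
        // so in any real deployment this should at least run over HTTPS.
        var response = await _http.PostAsJsonAsync(
            new Uri(masterSite, "/service/login"), new { user, password });

        if (!response.IsSuccessStatusCode)
            return null; // master site rejected the credentials

        var token = await response.Content.ReadAsStringAsync();
        return new RemotePrincipal(user, masterSite, token);
    }

    // Any edit made on the clone is replayed against the master, which owns the data.
    public Task<HttpResponseMessage> ForwardEditAsync(RemotePrincipal who, string path, object body)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, new Uri(who.MasterSite, path))
        {
            Content = JsonContent.Create(body)
        };
        request.Headers.Add("X-Auth-Token", who.Token); // hypothetical header name
        return _http.SendAsync(request);
    }
}
```

Note the clear-text caveat in the code: the clone/master hop is only as safe as the transport it runs over.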
Configuration This isn't a super reusable model like some of those you read about in the popular software architecture books, which probably accounts for why master/peer/clone sites don't exist very often. The considerations for every option are heavily customized to the problem being solved, and I'm sure we'll be making modifications or updating the configuration context for a while. Right now you can independently configure your primary server type (master or clone), whether or not users can use you as a pass-through authentication and edit server, whether or not web services are enabled so peers can enforce unidirectional-only communication, and asymmetric security credentials. Man, you name it and it is in there.
For the peer section we have full and selective modes. A full peer pulls all of the data on the remote peer locally for display (in a delayed caching manner, just like you'd expect, unless you set up a scheduled pull, which is also possible). I expect most people to configure full peers because they really are easy to set up and maintain. A selective peer is where you specify the groups/projects that you want to display. This is best for a user setting up their own personal toolbox who wants to select a couple of items from many different peers.
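As a sketch, the full/selective distinction might be expressed with configuration objects along these lines (the names and shapes are mine, purely illustrative):

```csharp
using System;
using System.Collections.Generic;

// A guess at what the peer configuration described above could look like in code.
public enum PeerMode { Full, Selective }

public class PeerConfig
{
    public required Uri Address { get; init; }
    public PeerMode Mode { get; init; } = PeerMode.Full;

    // Full peers pull everything lazily (or on a schedule); selective peers
    // list exactly the groups/projects to mirror: ideal for a personal toolbox.
    public List<string> SelectedProjects { get; } = new();
    public TimeSpan? ScheduledPull { get; init; } // null = delay-cache on demand
}

public static class Example
{
    public static IEnumerable<PeerConfig> PersonalToolbox() => new[]
    {
        new PeerConfig { Address = new Uri("https://tools.example.org"),
                         Mode = PeerMode.Full, ScheduledPull = TimeSpan.FromHours(6) },
        new PeerConfig { Address = new Uri("https://team.example.net"),
                         Mode = PeerMode.Selective,
                         SelectedProjects = { "Reflector", "RegexWorkbench" } },
    };
}
```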
We have an extensive configuration module already and we'll be continuously adding more to it. The concept is to easily adapt your toolbox to your own designs without having to touch the code. If we haven't given you enough options to satisfy your needs then we'll have to make something up, because I'm just about running out ;-)
These are the basics of the model ideas I have for Project Distributor. That doesn't mean Darren doesn't have other great ideas happening as well. He has some pretty extensive UI enhancements, but I'll let him talk about those. We even have another product idea that is kind of a bolt-on for Project Distributor, but that is probably a couple of months out, putting it into next year. Unfortunately we have too many ideas for our own good right now. Better than not having any ideas, I guess. I'll try to drop some code for some of the ideas above, that way you can get a look at how the entire system is implemented. I have some diagrams as well, but I'm far too tired right now to add the img tags to the HTML view.
Language parsing and compiler design doesn't have to be hard, but boy this book really sucks!
How'd you like that for an opening title? Did it grab your attention? Hell, you're reading this far so I guess it did. The book I'm focusing on here is Build Your Own .NET Language and Compiler, and please, don't click the link and then go buy it. I don't care about the 50 cents worth of referral money I'll get if you do. I wouldn't even recommend the book if I got 50 bucks of referral money (well, money talks, so maybe I would).
The book starts out with the basics of parsing and regular expressions and all that jazz. But the extent of the code is a bunch of screen shots. We are writing a parser/compiler dang it, we aren't WYSIWYGing our way through life at this point, you have to show some real frigin code. What you end up with is a bunch of screen shots of many tools for writing a compiler, but not really the code, unless of course you go grab the CD and break through all of the code without a lick of explanation from the book. God I hope the code is well documented with comments, or you just bought an issue of Compiler's Illustrated and this isn't the Swimsuit edition. I'll include some of my own links at the bottom, where I give actual code for many of these processes.
OK, so you get to see a bunch of tools, and what do you get? Well, you get a bunch of half-assed tools (sorry for the language if your kid is reading my highly technical blog... In fact, if he/she is I could use some interns, must type 50+ WPM and be proficient at C, C++, or C#). A mathematical expression evaluator is the first. I think it is always the first. People always trivialize math. So make sure you look at all the pretty pictures and try to glean some wisdom from the text. I have a mathematical expression evaluator by the way, it's called calc.exe and from what I can tell it has shipped since 16-bit Windows. He also makes an attempt at a regular expression workbench. You can't have enough of those (actually I'm not being sarcastic here, I always appreciate a new regex tool), but then he never writes anything or demonstrates compiler technology that uses regular expressions. Does he go into NFA/DFA technology? Well, he does talk about it for a few sentences. BNF format? Again a few sentences here and there. But wait, another tool is what you get and this time it is a picture of a drop-down menu with all sorts of really tantalizing names (convert from BNF to XML, display a BNF parse tree, display formatted docs, etc...). At this point use one of the pages to catch the drool coming off your lip, because that is as close as you'll get in this book to anything cool.
OK, so forget the tools. At some point he actually starts talking about real compiler technology. I think around chapter 7 maybe? I really should dig up the TOC on Amazon, but I'm only going to waste enough time on this book to finish this posting. Anyway, they start talking about the various parsing techniques. Recursive descent (RD), Top-Down, Bottom-Up... I think there are some other odd names they throw in there to mystify the reader. After reading all of the major compiler design books I shouldn't be mystified by something that could classify as a 4 Dummies book (unless it is something like Cross Dressing 4 Dummies, I could probably use that after my Halloween party)... Anyway, they really don't do the entire process justice, and I think at some point some more tools are used, Yacc might be mentioned, and bam, back to the pictures.
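To show the difference between screenshots and real code, here's the kind of thing a book like this should include: a complete recursive-descent evaluator for arithmetic expressions in about fifty lines. This is my own sketch, not taken from the book.

```csharp
using System;
using System.Globalization;

// A minimal recursive-descent evaluator for + - * / and parentheses.
// Grammar:  Expr   -> Term (('+'|'-') Term)*
//           Term   -> Factor (('*'|'/') Factor)*
//           Factor -> number | '(' Expr ')'
public class ExprParser
{
    private readonly string _src;
    private int _pos;

    public ExprParser(string src) => _src = src;

    public static double Eval(string src) => new ExprParser(src).ParseExpr();

    private double ParseExpr()
    {
        double value = ParseTerm();
        while (Peek() == '+' || Peek() == '-')
            value = Next() == '+' ? value + ParseTerm() : value - ParseTerm();
        return value;
    }

    private double ParseTerm()
    {
        double value = ParseFactor();
        while (Peek() == '*' || Peek() == '/')
            value = Next() == '*' ? value * ParseFactor() : value / ParseFactor();
        return value;
    }

    private double ParseFactor()
    {
        if (Peek() == '(')
        {
            Next();                         // consume '('
            double value = ParseExpr();
            if (Next() != ')') throw new FormatException("expected ')'");
            return value;
        }
        int start = _pos;
        while (_pos < _src.Length && (char.IsDigit(_src[_pos]) || _src[_pos] == '.'))
            _pos++;
        if (start == _pos) throw new FormatException($"unexpected '{Peek()}' at {_pos}");
        return double.Parse(_src[start.._pos], CultureInfo.InvariantCulture);
    }

    private char Peek() { SkipSpace(); return _pos < _src.Length ? _src[_pos] : '\0'; }
    private char Next() { SkipSpace(); return _pos < _src.Length ? _src[_pos++] : '\0'; }
    private void SkipSpace() { while (_pos < _src.Length && _src[_pos] == ' ') _pos++; }
}

// ExprParser.Eval("2 * (3 + 4.5)")  ==> 15
```

Each grammar rule becomes one method and the call stack does the parsing for you. That's the whole idea of recursive descent, and it fits on two pages with room left over for explanation.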
At this point I want to identify the worst problem I found throughout the entire book. Apparently the author didn't have time to finish the code, so they left a bunch of exercises for the reader. Nah, nah... You don't leave the compiler as an exercise in a book on how to write a compiler. You leave bits and pieces, but not the important stuff. Going through my Knuth books, I'm actually surprised when he leaves problems as exercises that require more know-how than what has been provided in the chapter. I don't mind exercises for the reader, but there is a limit, people. Imagine getting back from Home Depot with a 300-page picture book on building a house that had a bunch of pictures of completed homes, and some text offering that the building of the house will be left as an exercise for the reader. Doh!
At the end of the book, it is apparent I'm not going to get anything of use, and then it starts talking about code generation. Oooh, something with some meat. In reality, they've been naming their nodes for the calculator in such a way that the name of the node was pretty much the name of the op code that was going to be called. They may have some Quick Basic implementation code spits as well, but I'm confused at this point (and mystified) because I've been thumbing this book for an hour. In reality the act of spitting IL is probably worth an entire book of its own (oh wait, it is Inside Microsoft .NET IL Assembler, and you really should buy this one so I get 50 cents). That isn't fair, because that book is actually about how IL functions and not how to spit it. But I'd think one does precede the other, since eventually you're going to run out of node names to match to IL op-codes, and when opComplexOperation isn't mirrored by OpCodes.ComplexOperation I just don't know what you'll do.
How fair of a review is this? Well, I've read actual compiler books, quite a few of them. I've implemented my own parsers and compilers many times for many different circumstances. I don't think it is a hard process, and I think extending the process to a more general development audience is important. There should be a relatively accessible book on writing your own .NET languages, but this book is certainly not it. I'll keep looking around; I hear there is another book focused on .NET language generation and I'll have to search it out. Maybe an O'Reilly publication? Can you get an accurate review from something in about an hour's time? Well, I read fast, the words were quite large, most of the content was entirely familiar and only about 30% of the page material was text, so I'd hope so. Take this for what it is worth, but if I see any referral money for that book, I'll know someone is going to be laughing hysterically when they get that book in 2-3 days from Amazon. PS: I didn't and won't buy the book. I spent a couple of hours at Borders today running through two books that caught my eye when I was really looking for a great .NET Localization book. I need to dig up Michael Kaplan, since I'm sure he has written something somewhere.
Anyway, the cool thing about this FeedFlare add-on is that Rick Klau — our VP of Business Development; not an engineer — created it from scratch with (almost) no help. The FeedFlare platform allows anyone (yes, even Rick *wink*) to create something truly useful and valuable — something that isn’t just “neat” but actually informative from a business point of view.
Tutorials - Photoshop, Dreamweaver, VB.Net. Photoshop, Dreamweaver, Excel, Flash MX, VB.Net, Spyware + Windows XP video tutorials from $14.95 to $49 - Affiliates earn 50% read more:
MSN Virtual Maps Has anyone checked this out yet? MSN Virtual Maps? It is so cool: you can look up your house, business, city, anything, and it will show you a satellite picture of it that you can zoom into and scroll North/South/East/West read more:
Clickbank RSS Feed Tool and Multilingual keyword analysis Another cool RSS feed service today... Ambatch offers a Clickbank RSS feed for you to provide product data for integration into weblogs or with other RSS based scripts or publishing tools. Ambatch is the creator of an automatic translation tool... read more:
AmazType - your word in book cover artwork Amaztype is a nice new application from Japan, drawing your words using Amazon book cover artwork... you have to check this out - Flash is required and it's definitely fun to play around there and zoom in/out of the covers... read more:
Cool Web Marketing Tool lists a great SEO / SEM toolkit list read more:
Chips in the Kitchen
From dishwashers to food mixers to even the simple electric kettle, kitchens have long benefitted from labour-saving devices. Now Microsoft are upping the ante even further with their fully-automated cyber-kitchen. The fridge door boasts a central "family information" screen, barcode scanners automatically program correct cooking settings, and the whole room is packed with RFID chips (the radio technology currently being implemented in areas as diverse as pet identification and retail security), so should you pop a mixing bowl and flour on the work surface, the kitchen will start offering recipe suggestions, projecting them onto the worktop (see picture). This all sounds fantastic; however, how it will hold up in the real world is in doubt. It's going to be hard to read a bread recipe projected on a worktop if said worktop is covered in flour. And while LCD screens may look ultra cool dotted around the kitchen at first, they sure as heck won't look so good by the time the touch screens are covered with sticky finger marks, or spattered with cooking oil from the nearby frying pan.
Concealed Weapon Permits Win Sheriff, Police Support? Watch Video News Blog (8 min) A growing number of Sheriffs and Police Officials have joined the debate over Concealed Weapon Permits (CCW) as shown in an eight minute Full Disclosure Network™ Video News Blog featuring high ranking law enforcement officials in the Western United States. Available FREE at this URL: http://www.fulldisclosure.net/flash/VideoBlogs/VideoBlog31.php 24/7, on demand as a public service. (PRWEB Jul 5, 2006) Trackback URI: http://www.prweb.com/zingpr.php/Q3Jhcy1TdW1tLUluc2UtUGlnZy1JbnNlLVplcm8= read more:
TattooFinder.com Announces Free Premiere Accounts for Tattoo Industry Professionals TattooFinder.com announces the release of Premiere Accounts (TFPA) as a free upgrade from a standard TattooFinder.com account. This service is offered exclusively to tattoo industry professionals, providing top level discounts on design purchases to tattooists and their customers. Premiere accounts can increase overall business at a studio by offering numerous competitive advantages, and a TFPA operates under several different business models to best fit a shop’s needs. A TFPA provides the ability for customers to purchase flash for the tattoo studio that the studio can store online and access again for future use at no additional charge. (PRWEB Jul 13, 2006) Trackback URL: http://www.prweb.com/chachingpr.php/WmV0YS1Qcm9mLUNvdXAtU3F1YS1JbnNlLVplcm8= read more:
Core DJs Houston Retreat Featured on www.TyMeLyNeLife.com : History of Hip Hop Online Reality Show The Core DJs, an organization of influential DJs and music industry professionals, are being featured in the "History of Hip Hop" online reality show series at TyMeLyNeLife.com. The historical online reality show features performances from music business legends like MC Lyte, Dres from Blacksheep, Marley Marl, Mixmaster Ice from UTFO, Bun B, and Kool Herc. Plus stand up performances from artists like Farnsworth Bentley, Dre from Cool and Dre featuring Dirt Bag, Houston's own Mike Jones, and Slim Thug featuring The Boyz-N-Blue. (PRWEB Jul 2, 2006) Trackback URL: http://www.prweb.com/chachingpr.php/RmFsdS1UaGlyLVByb2YtU2luZy1JbnNlLVplcm8= read more:
Coupons are Cool In a recent survey, CoolSavings.com reports that 60% of people who responded find online coupons useful when shopping. (Maybe not so surprising as this is the line of business CoolSavings.com is in.) The survey also shows that promotional offers and... read more:
Error "Codec Initialization Error" when attempting to export as Flash Video (FLV) (Premiere Pro 2.0) Issue: When you try to export a Timeline as Flash Video, the export fails and Adobe Premiere Pro displays the error message "Codec Initialization Error". Details: You are exporting to a hard disk with low disk space. Solutions: Do one or more of the... read more:
Supported file formats in Adobe Premiere Pro 2.0 This table lists the file formats that Adobe Premiere Pro can import and export. For more information about importing and exporting files, see Adobe Premiere Pro Help. Format / Import / Export: Video: Flash Video (.flv), Microsoft AVI Type 1 (.avi)... read more:
iPod's Coolness Waning as Popularity Grows New report from The Diffusion Group confirms that "Cool Factor" is losing ground as primary reason for iPod adoption. [PRWEB Nov 1, 2005] read more:
SEO Chat Forums - Flash Elements on Index Page Date: July 17th, 2006 02:44 AM - Levi - Untitled Post: If I create 2 versions of my site, a static & flash version, can I possibly suffer from a duplicate content penalty since the content... read more:
Robotics has been one of those things that I find interesting but didn't know a whole lot about. But I ran across this great site on MSDN with a download for the Robotics CTP. It also has some great getting started materials and tutorials. Very cool stuff!
A very cool little power toy that I just ran across
The Best Practice Analyzer ASP.NET (alpha release) is a tool that scans the configuration of an ASP.NET 2.0 application. The tool can scan against three mainline scenarios (hosted environment, production environment, or development environment) and identify problematic configuration settings in the machine.config or web.config files associated with your ASP.NET application. This is an alpha release intended to gain feedback on the tool and the configuration rules included with it.
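As an illustration of the kind of check such a tool performs (my own sketch, not the BPA's actual rule engine), here's a toy rule that flags one well-known production misconfiguration:

```csharp
using System;
using System.Xml;

// A toy version of one "best practice" rule: in a production scenario,
// <compilation debug="true"> in web.config hurts performance and should be off.
public static class ConfigRule
{
    public static void CheckDebugCompilation(string webConfigPath)
    {
        var doc = new XmlDocument();
        doc.Load(webConfigPath);

        // ASP.NET 2.0 layout: configuration/system.web/compilation
        var node = doc.SelectSingleNode("/configuration/system.web/compilation") as XmlElement;
        if (node?.GetAttribute("debug") == "true")
            Console.WriteLine($"{webConfigPath}: debug compilation is enabled; " +
                              "disable it for production deployments.");
    }
}
```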
Fedora on VMWare on Ubuntu I've felt bad for some time now that I'm behind the times when it comes to virtualization. It's obviously potentially very useful to any developer. And with Intel Macs and Apple currently advertising Parallels Workstation to prospective switchers, virtualization is pretty mainstream by nerd standards.
And still I'd been waiting for it to become easy enough for me to try.
You might be wondering why I think I'd have difficulty installing or using a product that Apple recommends (even though it's a potential competitor's product), but my Macs are all still PowerPC, and my Ultra 20 is amd64 and, somewhat foolishly given the current state of the onion, I opted to install Ubuntu/amd64 on it. So Parallels' i386 .deb is useless to me. I'm also a little put off by the fact that when I tried to download the Parallels 2.1 Linux installation guide, Ubuntu's evince(1) couldn't display it. 'Error 3', I'm told, somewhat unhelpfully. (Getting ahead of myself, Fedora/i386 failed to install the .rpm with 'Missing dependency: libXft.so.1 is needed by package Parallels', and likewise failed to display the PDF installation guide.)
As for VMware, I may have mentioned before my disappointment that VMware Player didn't work out of the box on Ubuntu 5.10, and I may also have mentioned my disappointment that things didn't seem to have gotten any easier with Ubuntu 6.06. Specifically, if you choose 'VMware Player' in 'Add/Remove Applications' (easy enough so far), you're shown the following text:
Free virtual machine player from VMware The free VMware Player lets you run pre-built virtual machines on your desktop. You can run multiple operating systems side-by-side, easing the process of software development, testing, and evaluation. Virtual machines developed in VMware Workstation, ESX Server, or VMware Server can be run in VMware Player. To run the VMware Player, just run /usr/bin/vmplayer from within X.
Note: You will also need the VMware Player kernel modules to run vmplayer. These can be built from source from vmware-player-kernel-source, or you can install a pre-built vmware-player-kernel-modules package for your kernel.
I don't know about you, but I found that note pretty off-putting. Yeah, building kernel modules: that sounds like fun.
It turns out that if you pretend the note isn't there and just install anyway, you're automatically given suitable pre-built packages. So there's actually no hassle. VMware Player just appears on your 'System Tools' menu, and it works just fine.
I've no idea why they went out of their way to put people off in the package description.
Anyway, the next problem is that if you go to VMware's web site and want to download a RedHat image you'll find that RedHat want you to register for a trial. Luckily, thoughtpolice.co.uk offer a selection of images for hassle-free distributions. I tried the Fedora Core 5 one, because I wanted to test the software.jessies.org RPMs.
Performance is pretty good, though graphically the guest OS runs slightly more sluggishly than the host OS did before I installed the non-free NVIDIA drivers, which can make the guest quite uncomfortable to use other than on the command-line. Using Firefox to surf the web isn't obviously different from the host OS, but trying to start something from the 'Applications' menu can be quite tricky, with the selection highlighting lagging behind the mouse pointer. I certainly couldn't imagine doing anything more serious than a bit of testing.
The guest OS' clock is wrong, despite me telling Fedora to use NTP. I've no idea why.
A particularly annoying problem is that there's no obvious way to make clipboard transfers in or out of the virtual machine. That's really quite annoying even when just playing, and would be crippling if you were really trying to use both OSes.
It's pretty cool to be able to close the VMware Player window, stop the virtual machine, but then come back later (even after the host OS has rebooted) to exactly where I left off. I can reset the guest OS to its original state, too, by removing the '.vss' file. There's nothing in the interface for explicitly making machine state snapshots, though, or reverting to earlier states. Which seems a shame. Presumably the paid-for version has this.
I find it offensive that Apple's QuickTime Player that ships with Mac OS is full of grayed-out menu items saying, in effect, 'you're not really welcome to use this OS you think you've already paid for, but give us more money, you hateful plebs, and maybe then we'll consider letting you use the rest', but the good thing about that is that at least you can see what you're missing. In the case of VMware Player, which is effectively a demonstration version of commercial software, you'd think it would make sense for the demo version to let you explore all the full version's functionality, even if you can't use it all.
So that's what I think of VMware Player. What about Fedora?
I hadn't used an RPM-based Linux since about 1998. I hated it. Debian, for all its faults ('would sir like stale Debian or broken Debian?'), restored some of my faith in free Unixes. RedHat was just one long nightmare of manual package dependency resolution. If it hadn't been for the desire to test our RPMs, I'd never have thought of trying it again, even though I've heard of yum.
Also, for some reason, amongst the people I know, Fedora seems to attract the KDE users. That was another reason I'd assumed I wasn't missing anything. The default, though, seems to be GNOME, so those people must have deliberately inflicted KDE upon themselves.
Fedora's boot process looks a bit nicer than Ubuntu's. Ubuntu doesn't have a fancy graphical lilo/grub stage, and when it starts booting proper, it's in some ugly low-res mode with dark brown text on a black background. I'm sure the coprophiles in the audience love that, but for normal people it's not so great. Fedora has a plain Mac OS-like display while loading, but has a disclosure triangle that lets you see something like what Ubuntu shows by default. Neither, as far as I can tell, let you see the full unadulterated Linux boot noise. (But then you wouldn't guess how to see the full output on Mac OS. It makes you choose ahead of time by holding down command-v before the graphical boot starts.)
Fedora's default desktop is a lot less brown than Ubuntu's (again, scatologists may disagree as to whether this is really an improvement), and there's a better range of available background images. (One flaw is that it has the worst-ever icon for Firefox. I had to wait for a tooltip to convince myself it was a web browser. I don't know why the various distributions go out of their way to re-brand Firefox when Firefox's own logo looks fine at a variety of sizes, and is pretty well-known even in the general population. The number one problem I see non-Mac people have when trying to use my Mac? They can't find the web browser because there's no IE or Firefox icon on the desktop.)
From brief use, though, Fedora's an unconvincing proposition.
The first thing I noticed is that 'Package Updater', the equivalent of Ubuntu's 'Update Manager', is really slow. It's probably just that their servers are slower than Ubuntu's, but it makes a difference to the end user.
I also found that at the moment, for example, I'm unable to install the updates on offer because it's unable to resolve dependencies. I don't know how typical this is of Fedora, but Ubuntu has behaved perfectly in this area. (You'll also remember that I failed to install the Parallels .rpm because a dependency couldn't be resolved. So either I'm really unlucky, or Debian-based systems still have nothing to worry about.)
Fedora's 'Package Updater's display of 'update details' is exceptionally weak, too. It just shows the version numbers of the current and new packages. Ubuntu shows the package description (in case you don't even know what the package is) and the relatively readable changelog.
Fedora's 'Package Manager', equivalent to Ubuntu's 'Add/Remove Applications', is similarly unappetizing. It may have a GUI, but it's about as easy to use as dselect(1). In fact, it's very much like dselect(1). Ubuntu has something similar (but still better) in 'Synaptic Package Manager', but for simple use you don't need to bother with all that. I hadn't really appreciated Ubuntu's 'Add/Remove Applications' before, but I do now. Not only is it much easier to use and much faster, it also has stuff I'd actually want to install.
Fedora disingenuously talks about their Java development packages as if Java development stopped in 2003 and 1.4.2 was still the latest version. Worse still, they warn against installing Sun's RPM because Fedora's unfinished Java conflicts with Sun's package and 'Sun Java might disappear from an installed system during package upgrade operations'. Great. For Java developers and users of Java applications, Debian-based distributions are currently the place to be in the Linux world.
The one other program I got to play with is system-install-packages(1), the RPM equivalent of GDebi. Like the other tools, it's weak in terms of what it tells you about the package. GDebi is streets ahead.
On the bright side, Fedora 5 has a newer kernel than Ubuntu 6.06, and I'm told that if you care about NFS, you'd much rather have Fedora 5's 2.6.17 than Ubuntu 6.06's 2.6.15. SELinux might also be a consideration. Most of the GNOME desktop stuff is the same between the two.
Personally, I saw no reason to use Fedora as anything but a guest OS, and several reasons not to want to give up Ubuntu. read more:
I'm a pretty big fan of the new startup Riya. If you haven't seen it, it's a social networking web application that uses facial recognition software to create the social links. Basically you upload your photos and the application detects the faces and you give each face a name. It's pretty cool, if not a little scary.
Anyway, they recently announced that they were expanding the searching so that you could upload a picture of something and get back search results related to your picture. I think this is fantastic; I'm really surprised that Google, with all their PhDs, haven't come out with something similar.
I'm also annoyed that Microsoft haven't added anything innovative like this to the searching in Vista. Imagine being able to search your hard drive for something like 'joe', but then maybe something like 'joe outside' or 'TV'. Sure, some of the inputs might not be textual, but I'm sure that wouldn't be a major issue. We know that MS has some of this technology from its research department. I remember watching a Channel 9 interview in which one of the researchers showed a DirectX application that had facial recognition and the ability to know whether a picture was taken outside or inside. I guess this stuff ended up in the big black hole between research and production.
Well, I hope Riya can continue to innovate and keep ahead of everyone else, and please, Riya, think about creating an IFilter plugin for the Windows search system. I just keep thinking about how cool it would be to search my hard drive for specific items in a picture.
I've had the media center PC for over a year now and it's changed the way we watch TV dramatically. But the other day I found something new. I always knew the media center could back up recorded TV to a DVD, but I was surprised how the media center laid the DVD out. I fully expected the shows to run back to back with no DVD menu, but what I found was that the media center put a really cool media center themed menu onto the DVD. I was really surprised and really happy that after all this time I can still be amazed at how cool the media center is.
Scala is different from other concurrent languages in that it contains no language support for concurrency beyond the standard thread model offered by the host environment. Instead of specialized language constructs we rely on Scala's general abstraction capabilities to define higher-level concurrency models. In such a way, we were able to define all essential operations of Erlang's actor-based process model in the Scala library.
However, since Scala is implemented on the Java VM, we inherited some of the deficiencies of the host environment when it comes to concurrency, namely low maximum number of threads and high context-switch overhead. In this paper we have shown how to turn this weakness into a strength. By defining a new event-based model for actors, we could increase dramatically their efficiency and scalability. At the same time, we kept to a large extent the programming model of thread-based actors, which would not have been possible if we had switched to a traditional event-based architecture, because the latter causes an inversion of control.
(There's not really a proper abstract. The above is from the conclusion.)
I enjoyed this paper. It's a quick read and a nice demonstration of some of Scala's cool features. It's also a good example of using exceptions as delimited control operators, and in fact the one substantial restriction is imposed by the lack of the more powerful operators. They use Scala's type system to reduce the burden of this restriction, however, since they're able to state that a particular statement never returns normally (and thus must not be followed by more statements).
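To see the trick concretely, here's a rough analogue in C#, my own illustration rather than anything from the paper (the real implementation is Scala's actor library, where the type system can state that react never returns normally). React installs a handler and then throws, so the calling thread unwinds instead of blocking in a receive:

```csharp
using System;
using System.Collections.Concurrent;

// A rough analogue of the paper's event-based actors: React registers a message
// handler and then throws, unwinding the stack so no thread stays blocked.
// A tiny scheduler catches the exception and redispatches on delivery.
public sealed class SuspendException : Exception { }

public class Actor
{
    private readonly ConcurrentQueue<object> _mailbox = new();
    private Action<object>? _continuation;

    // Like Scala's react: this never returns normally (the paper encodes that
    // in the type system as Nothing; C# has no equivalent, hence the throw).
    public void React(Action<object> handler)
    {
        _continuation = handler;
        throw new SuspendException();
    }

    public void Send(object message)
    {
        _mailbox.Enqueue(message);
        Scheduler.Schedule(this);
    }

    internal void Run()
    {
        if (_continuation == null || !_mailbox.TryDequeue(out var msg)) return;
        var k = _continuation;
        _continuation = null;
        try { k(msg); }                   // handler usually ends in another React(...)
        catch (SuspendException) { }      // normal control flow here, not an error
    }
}

public static class Scheduler
{
    // A real version would dispatch onto a small thread pool; running inline
    // keeps the sketch short.
    public static void Schedule(Actor a) => a.Run();
}

public static class Demo
{
    public static void Main()
    {
        var echo = new Actor();
        Action<object>? loop = null;
        loop = msg => { Console.WriteLine($"got: {msg}"); echo.React(loop!); };
        try { echo.React(loop!); } catch (SuspendException) { }  // install first handler
        echo.Send("hello");
        echo.Send("world");
    }
}
```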
Those interested in the language/library boundary will also find it interesting for this reason:
The techniques presented in this paper are a good showcase of the increased flexibility offered by library-based designs. It allowed us to quickly address problems with the previous thread-based actor model by developing a parallel class hierarchy for event-based actors. Today, the two approaches exist side by side. Thread-based actors are still useful since they allow returning from a receive operation. Event-based actors are more restrictive in the programming style they allow, but they are also more efficient.
They have some fairly impressive empirical scalability results as well.
So the season has ended; my team finished in third place. The final stats for the season are available online here. If you drill down into the stats, you’ll note I got no goals, no penalty minutes, and a single assist. Go me. ;) Still, I feel like I had a great first season and a lot of fun. The handful of pictures Jenny was able to get before the camera battery died are up in the gallery. I’m in black with a white helmet, #79. (I would have been #13, but someone else on the team already had it).
Being in the top 4 teams meant we got to enter the playoffs, which are a simple single-elimination tournament. Our first-round game was last night, against the 2nd-place team. Of our 4 regular defensemen, one was recovering from salmonella and another had just gotten new skates (and isn’t really comfortable in them yet) since the steel runner in his old ones shattered during a game earlier in the season (seriously). The referees seemed to kind of have it in for our team; we took 7 penalties to I think 2 for the other team, although I honestly think there were an equal number of offenses on either side. We did score first, one goal in the first period, but the second period kind of fell apart on us and we gave up two power play goals.
Fortunately, one of the things this team is very good at is coming back from a deficit, and we really went to work in the third period. My defensive partner scored a beauty of a wrist shot off a faceoff during a 4-on-4, leaving us tied. We had several more great opportunities, including one where I cut off a clearing pass, passed to a forward at the side of the net, and ended up with the rebound and a wide open net. My shot was a little off balance and someone got in the way of it, and we ended up with about 8 players involved in a scrum in front of the net—most of us lying on the ice. The puck ended up right in front of me, and I saw our center about three feet away, standing up. I very carefully used my stick to push the puck towards him, but as soon as he touched it the ref whistled us for a hand pass (which was nonsense).
Time wound down and ran out; unlike the regular season, there are no ties in the playoffs, so we went to a 5-minute 4-on-4 sudden death overtime. We played for about 3-1/2 minutes, with some good chances on both sides, until a miscommunication ended up with us apparently having too many men on the ice. (I’m not completely sure there actually were too many men, but that’s what was called.) One of our centers, my defensive partner, and I ended up as the 3 in a 4-on-3. I was really excited to get tapped for the 4-on-3, since it indicated a lot of trust from my captain and teammates. Less than a minute later my partner got the puck in our zone. He had some time, so he held it, and the center started yelling for him to ice it all the way down. Instead, he very calmly passed it straight up the middle to the center, who fought off the lone defense in his way, broke in on the goalie, and beat him 5-hole. It was an amazing goal shorthanded in overtime, and the crowd (friends and family as well as the teams waiting to play the next game) started yelling. It was great. :)
So we’re into the championship game, which will be played next Monday. We’ve played our opponents three times: beaten them twice and tied once, so we feel like if we play well we have a great chance. Either way, I’ve had an amazing time this season and I’m really glad I got to play.
In other news, yesterday and today my company held the annual conference for our independent resellers, consultants, and other developers. This year it was held here in Austin. I was scheduled to judge an annual competition yesterday, but I had to bow out at the last minute because I needed to make it to my playoff game. Today I gave a 25-minute presentation on recent improvements to one of the tools we sell. It was very well received, although I have several notes on things I can do better next time I get the opportunity. It was very interesting to me to meet all these people who build an entire ecosystem of software based on our stuff, and hear their perspectives on how things are and should be. Being an insider was a new experience for me too; all these developers wanted to know what’s coming down the pipe, and how things work, and I have to keep in mind what I am and am not allowed to state publicly. It was very cool taking questions and being able to give good answers. Hopefully I’ll get to participate in the conference in years to come as well.
Sorry about the lack of updates. I got busy again. Or got lazy again—take your pick. ;)
I won’t recap the games individually, but our last 3 games have been a loss (a really bad loss; our captain got injured and one of our players got ejected for an unsportsmanlike conduct penalty), a tie (0-0; our first shutout), and a win (in which I played like crap; see below) respectively. Aside from the really bad loss (which wasn’t fun for reasons other than losing the game), the season so far has been lots of fun. I feel like I’m learning something every time I’m on the ice; I feel more confident every game, and I feel like every game I find some way to make a good contribution, even if it’s not scoring (and it’s not; I have 0 goals/0 assists on the season—but 0 penalty minutes, too ;). I actually kind of love blocking shots, partially because I find it incredibly frustrating when other people block my shots, and turning that around is very satisfying.
Anyway, I got really, really sick last weekend. I played hockey Sunday afternoon, felt great, had a great game (the tie, which we really should have won but for their substitute goalie playing like a pro), came home, had dinner, and then spent much of the night throwing up. Oh, and massively delirious. Apparently I was a real pain in the ass; I don’t remember much of it. Monday morning I felt slightly better but was weak as a half-drowned kitten and still shaky, so I stayed in bed all day. Wednesday, we played our next game, and I was very obviously still tired. I couldn’t seem to stand up on the ice at all; I fell down even more than usual (and usual is fairly often—my teammates call me the “Tasmanian Devil” for my signature spin-around-and-fall-down routine). Fortunately I don’t seem to have relapsed, and we have a 10-day layoff to the next game, so I have time to recover.
We just finished a major milestone at work. The project that’s been taking up the time of the majority of our developers for the last 3-1/2 years has finally come to a closure point, and from here on out we’re going to be starting to pull that code into our existing applications and actually shipping it to customers. This isn’t really new for me, because the stuff I’ve been working on has been the actual shipping applications, up until the last 3-4 months. Lately I’ve been getting things in place to significantly improve our infrastructure to make that migration easier; after Monday that will all be done (we’ll be using .Net 2.0, SQL Server 2005, and Team Foundation Server—all really cutting edge stuff) and I’ll be back working on actual code, which will be nice. I’m really a code monkey at heart. :-D
Some of the stuff I’ve been working on (and will be working on) is really cool; I’ll be helping present some of it at one of our annual conferences, aimed at our resellers and other people who customize our application. That’s very exciting for me; I often envy the situation of the various Microsoft developers who blog about their work and have very active, engaged communities, and this is kind of a similar situation for me. Although I’ve worked on applications for end-users at other jobs, this is the first time that I’m really starting to get directly connected to users/developers who aren’t actually on my team or working for my company, and that’s pretty cool.
Simply put, flash memory will enable a revolution in improving computer performance in daily utilization scenarios. Your computer will boot up faster. It will launch applications significantly faster. (Hey, it will shutdown faster as well.)
To see why we will have this dramatic performance improvement, let's remember how harddisks work: whenever you have a mixture of random I/O requests, the actuator moves across different tracks to read/write the corresponding data. Switching tracks is a slow operation. For an average SATA drive, this is around 9 milliseconds. This might not seem like much, but a few milliseconds per seek means that you can have at most a few hundred random I/Os per second. And this feels like light-years compared with the performance of other components in the system, like RAM access speeds or even CPU frequency. So, just to give you an example, random I/O with 4 KB requests and an average of 4 ms seek time per request would mean around 1000/4 * 4 KB = 1 MB per second disk transfer rate. Pretty small, don't you think? Especially when you compare it with sequential I/O, where you can get a much faster transfer rate (say, 60-70 MB/s on a regular harddisk, depending on the rotational speed, data density, etc).
One trick to alleviate this performance issue is to minimize seek time by reordering writes and/or serving reads from cached memory. Memory caches can greatly help in this regard, but there is a little problem: applications, the OS, and other components do not expect writes to be reordered. If write reordering becomes visible at the application level, data corruption can appear, especially if the machine reboots in the middle of performing a set of reordered writes.
For example, say the application performs Write(block1) followed by Write(block2) in one thread, and Read(block1) followed by Read(block2) in a different thread. In this sequence, the application expects block1 to always be written before block2. Having this guarantee simplifies, for example, application recovery semantics, given that the computer can crash between writing block1 and block2. But if we reorder the writes and only block2 makes it to disk before a crash, our application recovery logic has nothing valid to work from. And so we get corruption.
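To make that concrete, here's a small sketch (mine, not from the article) of the classic write-ahead pattern that depends on exactly this guarantee; the explicit flush is the barrier that forbids the reordering described above:

```csharp
using System;
using System.IO;

// Why write ordering matters (illustrative only, not a real storage stack).
// The write-ahead pattern: write the data block first, then a commit record.
// Recovery trusts the commit record, so if the two writes are reordered and
// the machine dies in between, recovery sees a commit that points at garbage.
public static class WriteOrdering
{
    public static void Commit(FileStream data, FileStream log, byte[] block, long offset)
    {
        // Step 1: the data must actually be durable before we claim it is.
        data.Seek(offset, SeekOrigin.Begin);
        data.Write(block, 0, block.Length);
        data.Flush(flushToDisk: true);          // barrier: no reordering past this

        // Step 2: only now write the commit record the recovery code will trust.
        using var w = new BinaryWriter(log, System.Text.Encoding.UTF8, leaveOpen: true);
        w.Write(offset);
        w.Write(block.Length);
        log.Flush(flushToDisk: true);
    }
}
```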
Still, today's storage controllers perform all sorts of tricks, like maintaining a write-through cache in volatile RAM coupled with limited reordering. More advanced controllers or SAN equipment use persistent caches (battery-backed volatile RAM) to perform write reordering, complementing advanced storage features like RAID configurations, etc.
The solution - why flash is good
By now it should be clear how flash can be used in this picture: you can use inexpensive flash as a persistent write-through cache for reads/writes. Also, the fact that this flash is persistent enables reordering I/O requests at an unprecedented level, therefore greatly reducing our nasty seek time bottleneck.
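As a sketch of the idea (my own illustration, not an actual hybrid-drive design): acknowledge each write as soon as it's persisted in flash, then flush to the platters in offset order, turning scattered random writes into one near-sequential sweep.

```csharp
using System;
using System.Collections.Generic;

// Flash as a persistent cache in front of a spinning disk (illustrative sketch).
public class FlashWriteCache
{
    private readonly SortedDictionary<long, byte[]> _pending = new(); // keyed by disk offset
    private readonly IBlockDevice _flash;   // persistent, no seek penalty
    private readonly IBlockDevice _disk;    // cheap capacity, expensive seeks

    public FlashWriteCache(IBlockDevice flash, IBlockDevice disk)
    { _flash = flash; _disk = disk; }

    public void Write(long offset, byte[] block)
    {
        _flash.Write(offset, block);   // durable immediately: safe to ack the caller
        _pending[offset] = block;      // remember what still needs to reach the disk
        // (after a crash, _pending would be rebuilt by scanning the flash; omitted here)
    }

    public byte[]? Read(long offset) =>
        _pending.TryGetValue(offset, out var b) ? b : _disk.Read(offset);

    // Because the copy in flash survives a crash, we're free to reorder: one
    // elevator pass in ascending offset order instead of a random seek per write.
    public void FlushToDisk()
    {
        foreach (var (offset, block) in _pending)   // SortedDictionary: ascending offsets
            _disk.Write(offset, block);
        _pending.Clear();
        _flash.Trim();                 // flash copy no longer needed
    }
}

public interface IBlockDevice
{
    void Write(long offset, byte[] block);
    byte[]? Read(long offset);
    void Trim();
}
```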
The new 2Gb OneNAND chip doubles the capacity of a OneNAND memory device (from 1Gb) and increases the chip's 'write' speed from 9.3 MByte to 17 MByte per second.
"We're seeing a rapidly widening market for our OneNAND memory because of its outstanding performance and capacity that has become even more noteworthy with the application of 60 nm technology," said Don Barnetson, Director, Flash Marketing, Samsung Semiconductor. [...]
Because of its exceptionally high performance, OneNAND can serve as a catalyst in the development of new product markets. A much-discussed example of this application-creating role is in how OneNAND memory is now being specified as the buffer memory inside a hybrid hard disk.
Samsung successfully demonstrated a commercial Hybrid-HDD prototype for the first time at the MS Developer Conference (WinHEC: Windows Hardware Engineering conference) in Seattle last month.
Flash-based I/O optimizations - already present in Vista
One more thing worth mentioning: Vista already benefits from flash-based optimization. The feature is called EMD (External Memory Device), and it can boost the performance of your computer if you simply add a USB thumbdrive and designate it as an EMD device. Under the covers, it works in a way similar to the technique described above.
Jason Levitt has been teasing me in our discussions on cross-domain requests about Yahoo's upcoming authentication API. The recurring problem: how do you offer web APIs that can be mashed up but involve personal data? You want to allow a large number of third parties to integrate with your services, but you don't want phishing sites to abuse them.
Let me do a quick re-cap of the problem space before analyzing the pieces of Yahoo's solution.
Here is what is possible today for web browsers and what some people have recommended for the future:
cross-domain web APIs using script includes,
accessing web APIs across domains using a web proxy,
In all these cases, there is no good authorization story that would allow working with personal data stored in the service in a secure way.
A number of techniques are generally used for controlling access to web APIs: user authentication cookies (or HTTP auth), API keys, and crossdomain policy files.
The problem is that API keys and crossdomain policy files are too restrictive, because the service needs to decide up front which third parties to let in.
On the other hand, access control based on user authentication cookies is very open to unplanned integration, but it also creates a huge phishing risk. This is a classic example of the confused deputy problem that appears in principal-based security models.
As a result, most web APIs today involve either no user data at all (search, maps, ...) or only non-sensitive user data.
Yahoo appears to be tackling the challenge with its announced 'browser-based authentication' (bbauth). From the little information I could gather so far, from Drew Dean's slides, it seems less of an authentication than an authorization system. Unlike cookie-based approaches, which give access to any agent presenting user credentials (principal-based security), it appears to follow a capability-based security model, which only grants access if the agent presents the proper 'secure handle' or 'capability' when calling the service. Such capabilities are sufficient to gain access to the service and need no additional authentication; they are communicable tokens of authority.
Let me re-iterate that I don't think this protocol is about identity, unlike Passport, TypeKey or CardSpace (aka InfoCard), but rather simply about authority and access. This characteristic is important: we want services to cooperate without being tightly coupled at the identity level. Drew Dean's slides frame the issue as allowing 'pseudonymous delegation of partial rights', which means the names of a user in different services don't have to match, and the authority that is granted is granular.
What's great about this model is that the authority carried by a capability can be as granular as the design and scenario require, and capabilities are only given out to third parties under certain conditions, which again are chosen to fit the desired requirements and user experience.
For example, the granted authority could vary in action and scope: a handle could give full access to the user's data, or only access to part of it. The design of the capabilities could also comprise additional dimensions, such as a time restriction; for example, a capability could be valid for only 24 hours. One of the myths about capability systems is that capabilities cannot be revoked. Revocation is actually possible, and in Yahoo's design any granted authority can be revoked by the user at any time.
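To make the shape of this concrete, here's a toy model of a scoped, expiring, revocable capability. Every name here is invented - this is emphatically not Yahoo's actual design, just an illustration of the checks described above:

    # Toy model of a capability as a signed, communicable token.
    # All names and the token format are invented for illustration.
    import time, hmac, hashlib

    SECRET = b'server-side-signing-key'   # hypothetical signing key
    revoked = set()                       # handles the user has revoked

    def mint(user, scope, ttl_seconds):
        expires = int(time.time()) + ttl_seconds
        payload = '%s|%s|%d' % (user, scope, expires)
        sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return payload + '|' + sig

    def check(handle, wanted_scope):
        payload, _, sig = handle.rpartition('|')
        good = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        user, scope, expires = payload.split('|')
        return (hmac.compare_digest(sig, good)   # the token itself is the authority
                and handle not in revoked        # revocable at any time
                and scope == wanted_scope        # granular authority
                and time.time() < int(expires))  # time-limited, e.g. 24 hours

A third party holding a handle minted with mint('someone', 'photos.read', 24*3600) can read photos, nothing else, and only for a day - no shared password, and no requirement that identities match across services.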
One common policy for giving out capabilities is to get consent from the user. The screenshots of the F-Spot integration with Flickr (found on this thread) show the Yahoo consent UI. Although I don't like the desktop/web integration in this scenario and I have some concerns about repeatedly prompting the user for consent, I believe this approach has a lot of potential for cross-domain service integrations on the web. Cross-domain support in browsers is the main missing link before some really cool web apps can be unleashed. In the meantime, you can use FlashXMLHttpRequest or some other cross-domain workaround.
I look forward to reading the documentation when the protocol is released and to trying out the resulting user experience in practical scenarios. Let me know if you find any other information. Jason mentioned that the protocol is open and simple to implement, which means it could be supported by other services and hopefully used in a wide variety of mashups.
Thanks again to Jason for his interest, feedback and support. I'm pretty excited to see what cool stuff he'll cook using this and the new web APIs from Yahoo.
As you can see in the demo/index.html file, after including dojo.js and FlashXMLHttpRequest.js, you'll need to initialize dojo and the flash object by calling InitFlash with the name of a function. That function will be invoked once the flash object is loaded and ready to make requests. From there on, you can create FlashXMLHttpRequest instances and use the 'open', 'onload' and 'send' methods almost as you would with a regular XMLHttpRequest object. You can also call 'setRequestHeader', but only to set the content type request header.
More generally, FlashXMLHttpRequest still has some limitations, due to the native Flash capabilities. First, access to other domains is restricted by use of a crossdomain.xml file. Second, you can only make GET and POST requests. It will become possible to support other verbs, such as PUT, DELETE or HEAD, with the new APIs provided by Flash 8.5.
Eric Sink duly noted (to paraphrase liberally) that it's probably a bad idea to hire a zealot. Yes, I said zealot, he didn't. He said hire a professional, and I think the moral of the story is, when remotely possible, to hire a hacker who is a professional. Hopefully those who read his article will read more than the last paragraph, and thus not conclude that hackers cannot be professionals.
Then what is a professional hacker? (cue music) Passionate, but not militant. Expressive, but not zealous. Aggressive, yet adaptive. Smart, yet empathetic. Able to type-cast, yet dynamic. Can follow procedures, yet functional. Uses source control and bug trackers, but makes it PERTy only when necessary. Results-oriented and collaborative. And finally: proud, but not condescending.
Yes, I lean towards Eric's perspective, yet I have been influenced by enough respectable (and just plain cool) hackers in my life not to let potential short-sighted conclusions go unnoticed. Tact is a necessity and will go a long way. I've met as many tactless hacker consultants as I have smug 'GPL is leprosy!' bandwagoners (sometimes both!), and since they negate each other out of existence, I'm (professionally!) content with a product-shop wife and a hacker mistress.* I admit to having read Great Hackers through nostalgic eyes, and I generally feel most product shops miss the point in regards to leveraging the hacker identity. My guesstimate would be that they only go as far as ThinkGeek, or worse, Despair.com. Frankly, I find the term hacker about as saturated and misused as engineer. Funny how both hackers and shop-grinders like to be recognized for what they contributed to the community. 1:46AM. I digress.
*Wish I could remember the Pythonista who said 'Java is my wife, Python is my mistress' in some comment thread...here it is. Values of Cool, indeed.
Perhaps I should add a link to the Slashdot comments, but I've found Slashdot's Read More... considerably harmful.
For those wondering about the title, I enjoyed this book. Not that it exemplifies tact, and too bad G and J are only phonetically similar, but I'll stick with it. I wonder if Vonnegut is required reading for Comp. Sci. majors. It would likely get read, but would it help?
Daily Journaling Notes
Jeremy Hylton notes (http://www.python.org/%7Ejeremy/weblog/031009b.html) the importance of keeping good notes, along with a good chunk of sound advice from the article 'Coding Smart: People vs. Tools' by Donn M. Seeley (http://www.acmqueue.org/modules.php?name=Content&pa=showpage&pid=69), which I finally got around to reading. I'd love to hear his review of CVSTrac (http://www.cvstrac.org/), and whether it's feasible in a Zope scenario (http://zwiki.org/CVSTrac).
Ed Taekema has noted (http://www.pycs.net/users/0000177/weblog/2003/10/16.html) the use of wikis in this area, along with several related articles.
'Coding Smart' felt slightly dogmatic, but appropriately so. The initial watch-out - that new languages may kill your development time - bugged me, as it's one of the reasons I've used to avoid writing testing apps in Python, redundantly testing by hand instead. His other point, that a new language may not be transferable to other workers, I feel is a non-issue with Python. Idioms. Ouch, they are the difference between 'Pythonic' and 'Python', eh? I admit I was set back here. Was this general-purpose programming language leading me down another esoteric path, reducing my pseudocode-ish Python to n00bish dribble or unreadable shortcuts (http://www.enac.northwestern.edu/%7Etew/archives/000072.html, http://www.python.org/doc/current/tut/node6.html#SECTION006750000000000000000)? (What cocky programmer wouldn't claim to be able to pick up another language in a few days, let alone debug it? http://fishbowl.pastiche.org/2003/10/08/sig_quote_of_the_day)
Later Seeley moved into 'use pseudo-code', which I dug but raised an eyebrow at, given Python's readable code and experiment-and-validate capabilities. Python does add a lot of value here, though of course not everywhere. I'd like to tie in the frustrations of trying to communicate in user stories when a peer coder speaks in schema tweaks and algorithms, or how generally good problem statements inherit the solution, but I won't. Peer reviews are also cool when common values are apparent and personal dogma is chained.
Outlining. +1. This is what I use Word for (http://www.pycs.net/sqr/2003/09/24.html).
My college business professor (and many others) repeatedly recommended keeping a professional journal. I've never been able to keep up a paper journal. My personal wiki (http://zwiki.org/PersonalWikiExplanation) is primarily 'notes to self' and similar links. Good notes in change logs are critical, and I prefer them to be tucked within the source, as a 'blog within a file' type of thing. Code diffs are no replacement, and are far from convenient - especially when trying to grok code and its hi*story*. My personal wiki has a blog aspect (http://webseitz.fluxent.com/wiki/FrontPage) with an entry form on the front page. This causes more blog and less wiki, but without it, the time it took to figure out where to put a resource would derail many of my intentions.
Thought of the day: when folks complain about documentation, what part of knowledge management has failed?
Python and X10 Home Automation, Part 1
I recently saw an ad from x10.com for a free (you pay shipping) X10 starter kit, including a 'Firecracker' computer interface. That was a deal I couldn't pass up, so I ordered it through their web site, and 3 days later, the kit arrived.
The kit consists of the CM-17a 'Firecracker' serial computer interface, which transmits via radio; a transceiver module, which receives the radio commands from the Firecracker and retransmits them via the X10 protocol over your house wiring; a lamp module for controlling... lamps; and a PalmPilot-sized hand-held remote control that lets you manually do what the computer interface does. Oh, and the transceiver module also doubles as an appliance module, allowing you to control appliances of up to 500 watts.
With the hand-held controller, you can control any X10 modules you have, either the ones that come with the kit, or any add-on modules you may want to buy. You could go wild, like many do, and completely automate your home -- lights, appliances, garage door, pool heater, ferret feeder, whatever.
But with the computer interface, things get much more interesting. You can, for example, download from x10.com a free application that duplicates the appearance and functionality of the hand-held controller on your computer screen. Or, you can download, for $20, an application that fully utilizes your computer and the x10 interface to do full automation. Want your hot-tub to turn on at a certain time every day? No problem. Want your lights to simulate an occupied house while you are on vacation? Easy.
Naturally, hand an X10 computer interface to a Python programmer, and he'll immediately start writing code for it. Or that was my intent, anyway. The first thing I did was google around for any existing Python projects for X10. I found two, Pyxal and Pyx10. Both projects seem to be unmaintained. Pyxal is pure Python, and does not support recent X10 controllers like the Firecracker. Pyx10 uses a wrapper to turn the XAL library into a Python extension module. It supports recent X10 controllers, including the Firecracker.
I downloaded and examined both. Pyxal was right out, as it has no Firecracker support (why not add it yourself, you ask? I'll get to that in a moment...). Pyx10 and XAL looked good. After compiling and installing XAL (a snap), I tried compiling Pyx10. Nope. The wrapper code for XAL would not compile. From a quick look, it appeared to be out of sync with XAL.
I could have continued hacking at it to get it to work, but with further googling (the trademark police are gonna get me), I found Project WiSH, a project for turning X10 device drivers into... well, Linux device drivers. Super! Instead of having to do low-level device handling from my code, I can simply open a Linux device driver and write commands to it, just as if I were writing text to a file. And WiSH was a snap to compile and install. Just make sure you have your kernel source loaded on your machine. (For the CM-17a 'Firecracker', be sure to download the 1.6.10 version of WiSH. The later 2.0.1 version does not yet support it. But both versions support the CM-11a, which is the other popular modern X10 computer interface controller.)
Now, I do my work under Linux, so this is just what the code doctor ordered. Actually, it's even better than it sounds. You see, there's this little bit of info about that Firecracker X10 controller...
If you look at one of the other X10 computer interfaces, say the CM-11a that comes with another of the home automation intro packages that x10.com sells, you will see that it is controlled by the computer rather like an external serial modem: connect it to your serial port and send it strings of ASCII characters. Not so with the CM-17a 'Firecracker'. This little guy is a very small serial pass-through 'dongle'. From what I can tell from my Google research, you must directly control the radio transmitter in it by bit-twiddling the RTS and DTR lines of the serial port. You must assemble a 5-byte command via bit masking, then bit-shift it out to the CM-17a by directly controlling the states of the RTS and DTR lines, doing the timing yourself. There are no smarts. Ouch. No wonder this is the bargain-basement controller.
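Just to show the flavor, here's a rough sketch of that bit-banging with pyserial. The framing bytes (0xD5 0xAA header, 0xAD footer) and the line states come from informal protocol write-ups I turned up while googling, so treat the details as assumptions rather than a reference implementation:

    # Rough sketch of CM-17a bit-banging via pyserial. Framing bytes and
    # line states are from informal write-ups; treat them as assumptions.
    import serial
    import time

    def send_cm17a(port, command16):
        ser = serial.Serial(port)
        # 5 bytes total: 0xD5 0xAA header, 16-bit command, 0xAD footer
        frame = (0xD5AA << 24) | ((command16 & 0xFFFF) << 8) | 0xAD
        ser.dtr = True; ser.rts = True          # standby: both lines high
        time.sleep(0.5)                         # let the dongle power up
        for i in range(39, -1, -1):             # shift out 40 bits, MSB first
            if (frame >> i) & 1:
                ser.dtr = False                 # '1': drop DTR, RTS stays high
            else:
                ser.rts = False                 # '0': drop RTS, DTR stays high
            time.sleep(0.0005)                  # you do the timing yourself
            ser.dtr = True; ser.rts = True      # back to standby between bits
            time.sleep(0.0005)
        ser.close()

Compare that with the Project WiSH approach, where all of this lives in a kernel driver and you just write to a device file.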
The CM-11a controller has another advantage, too. It's smart, it has its own processor. So you don't even need to leave your computer on to do real-time home automation. Use the scheduling software to send it commands, like 'turn on my security light at local-time dusk, and turn it off at dawn', and the CM-11a will do it, all by itself.
But I don't have the CM-11a. I have a CM-17a and a Linux box. Add in the device drivers from Project WiSH, and from a Linux command line I can execute echo 'on' >> /dev/x10/a1 to send the 'on' command to the X10 device at house code 'A', unit code '1'. How cool is that?
OK, how can we combine equal portions of X10, Project WiSH, Linux, Python, and fun? (OK, fun gets a bigger portion.)
Here's the deal. I work for a major software house. We do automated nightly compiles of our code on all of the platforms we support (Linux, various flavors of UNIX, Windoze). The last thing you want is for some code change you made that day to 'break the build'. The automated process sends out email giving that night's build status. If you broke the build, it's supposed to be your first priority to fix it.
I keep forgetting to check my email. I have many projects, they grab my attention, and it may be hours before I check my mail. Yes, I have a little task bar thingie that tells me if I get new mail. I don't look at it if I'm concentrating on a problem.
Python and X10 to the rescue! (This is a fun solution looking for a problem.) I now have a Python script that is run via cron every 10 minutes. It uses the poplib and email modules to grab and parse my email, looking for the specific patterns that a 'you broke the build' message will contain. If it finds such a message, it opens and writes an 'on' command to the proper X10 device driver, which then turns on the BIG RED ROTATING LIGHT. I kid you not.
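For the curious, here's roughly the shape of that script. The POP3 host, credentials, device path, and the exact 'broke the build' pattern below are placeholders, not the real ones:

    # Sketch of the build-alarm cron job. Host, credentials, device path,
    # and the subject pattern are placeholders; substitute your own.
    import email
    import poplib
    import re

    POP_HOST = 'mail.example.com'
    USER, PASSWORD = 'me', 'secret'
    DEVICE = '/dev/x10/a1'                        # house code A, unit 1 (WiSH)
    BROKEN = re.compile(r'broke the build', re.IGNORECASE)

    def check_mail():
        box = poplib.POP3(POP_HOST)
        box.user(USER)
        box.pass_(PASSWORD)
        count = len(box.list()[1])                # number of waiting messages
        for num in range(1, count + 1):
            lines = box.retr(num)[1]              # raw message, line by line
            msg = email.message_from_bytes(b'\n'.join(lines))
            if BROKEN.search(msg.get('Subject', '')):
                f = open(DEVICE, 'w')
                f.write('on')                     # BIG RED ROTATING LIGHT
                f.close()
                break
        box.quit()

    if __name__ == '__main__':
        check_mail()                              # run from cron every 10 minutes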
The new PodShow+ site, unleashing pretty darn soon, has a personal bio feature called 'The Legend of me'. I just filled mine out. Here's what I wrote:
Pretty verbose --- it fills the allotted space on my profile page --- yet it barely scratches the surface.