Tuesday, November 11, 2008

A thought experiment on the end of Humanity

Pretty slick title huh? Thought of it myself.

My girlfriend used to be a high school teacher. She told me an interesting fact: one of her bigger issues was that when a student handed in an assignment, it wasn't uncommon for it to be something the student had simply found on the Internet, cut and pasted into a word processor, and turned in. If you're like most people I tell this to, you're a bit unhappy about the laziness of these students. Instead of learning something for themselves, they simply used Google to find it and (effectively) recited what they found. Pretty weak, eh?

Einstein was once asked how many feet are in a mile. He replied something like "I don't keep information in my head when I can just open a book" (I googled that). Einstein, apparently, didn't have Google.

Funny thing is that when Einstein was alive, he'd look up simple facts (e.g., 5,280 feet) in a book. Ten years ago we could look them up on our computers. Now I can look it up on the phone that's with me at all times. What do you think is next?

Let's say that what's next is a mind interface to the net. This isn't a new idea, and people are working on it right now.

But think a second - what happens when we have instantaneous access to the Internet without moving a muscle? If you ask me how many feet are in a mile and I answer, you won't know if I knew it or if I "looked it up". And at that point, it pretty much won't matter. If it takes more effort to memorize something (in my real memory), then it will be better and faster to just leave it on the Internet and grab it there whenever I need it.

Like all technology this promises to have its glitches at first - but eventually, it will be pretty reliable. And what then? Well, if our minds work like our flabby bellies, then our human memory will atrophy. We'll slowly but surely lose the ability to remember things.

We tend to describe the idea of "knowing things" as wisdom. And we tend to describe the idea of "figuring out things" (like math or connecting disparate concepts) as intelligence. A way to distinguish this is that you can be born intelligent, but you can't be born wise.

Tomorrow's Internet has the potential to fully replace wisdom. We won't be any less wise - in fact, we'll all be instantaneously super-wise. And equally-wise (which may be weirder than being super-wise). Even children.

If you think this is crazy - I argue it's already happening. Those kids in my girlfriend's old class already find memorizing things to be more effort than simply googling them. As soon as they get a faster interface to that information, they'll take it.

Most people who disagree with me on this don't actually disagree; they simply fear it. It does spell a fundamental change in humanity - and that's rather frightening. Things will surely change fast. At a minimum, any business that relies on hiding information will be, ya know, gone.

But it doesn't end there.

If we all gain super wisdom, then the only mental differentiation between us is intelligence. How fast can you multiply two numbers? How many times must someone explain particle physics to you before you get the relationships between the elements involved?

A computer first beat the reigning human chess champion in 1997. We've pretty much always associated chess with intelligence, but chess is actually a pretty unfair example. Humans approach chess abstractly - in some sense considering the board as a whole, processing it in parallel, and extrapolating opportunities from it. Computers work far differently. They simply examine every possible continuation of the game from the current position (with some smart algorithms to avoid examining useless moves). Chess has so many possibilities that it took a while for computers to get fast enough, and computer programmers to get clever enough, to search enough possibilities to beat a human.

Computer "intelligence" is likely farther off than computer "wisdom". But you're fooling yourself if you think it isn't coming. The human brain is, in essence, just a machine - damage it and it stops working; give it alcohol and it gets off kilter. Computers will reach it - maybe not computers as we know them, but computational machines will. Ray Kurzweil predicts this sometime in the 2020s or so (per the book I read, anyway; he might have changed his estimate - incidentally, he predicted a computer would beat the world chess champion by 1998, and it happened in 1997 - he was off by a year).

So what happens then? To us I mean.

By that time we will have farmed out our personal memory long ago. And then, we'll start farming out our thinking. We already happily do this with calculators or spreadsheets. We all know computers kick our ass when it comes to math. Who wants to do long division anymore? Let the computer do it. We've already farmed that part of our intellect out. If you told me I could get a chip put in my head that let me do all math instantly, I'd sign up for sure.

What happens when computers can do more? I mean, literally think for us. It won't happen overnight. But just like long division and multiplication today - we'll do it little by little. As computers get smarter and smarter, and as our interface to them gets faster and simpler, we'll slowly but surely, give them our thinking tasks.

And just like the dumbification of our kids today - and just like our fat bellies and long-atrophied human memory - our unused thinking capacity then gets lazy too.

What happens then? Seems like, in some sense, we sort of cease to exist as we know ourselves. We become conduits to some consciousness we created elsewhere. You can call this extinction, paradigm shift, or apotheosis - it probably doesn't much matter.

I'm not smart enough to know what happens in this borgian future - but I have a feeling that in 20 or 30 years, I sure will be. And so will you.

Kurzweil is a great read on ideas of the future:
The Age of Spiritual Machines

Monday, October 27, 2008

Never send your Application out alone again

Anyone who reads my blog knows I was one of the founders of a company called Preemptive Solutions, Inc. Preemptive was my first startup and has been very successful in evolving its DashO and Dotfuscator product lines. Those products are near and dear to my heart, as the initial incarnation of DashO was spawned from my Ph.D. dissertation work. People now often think of it as a Java obfuscator, but to me that was largely an afterthought. It's a static analyzer of an interesting sort for Java. I had originally set out to write a Java bytecode optimizer, but I quickly realized that given Java's nature, you fall pretty flat on how and when you can do static analysis.

Java (and .NET) has very many dynamic components (the more you look, the more you tend to find). My dissertation (http://portal.acm.org/citation.cfm?id=1087610) was most interested in a scheme to identify closed systems inside of Java applications - basically cordoning off open hooks into Java applications and not trying to optimize across them. In essence, it identifies sub-applications within Java and .NET applications and optimizes those one at a time.

This is all well and good, but it's basically old news.

What's new news is Preemptive's new product line. As Preemptive evolved those static analyzers, they got really, really good at instrumenting code. In fact, it became second nature. Preemptive's new product line takes advantage of this know-how.

They call it "Runtime Intelligence" - I call it fricking cool.

I explain it like this... In the past decade, application servers popped up to provide a cradle of application services into which you drop your business logic code. Basically, you write a nice little piece of business logic, put it into an application server, and that server takes care of all the boilerplate details like database access, fault tolerance, load balancing, etc. It's silly to think every website should have to write code to handle these generic needs.

The heyday of application servers isn't what it used to be, as many discovered that they added a ton of overhead for applications that weren't using all those shiny services. In fact, new web and server frameworks pop up all the time to literally "part out" application server functionality. You used to ask someone about their server infrastructure and they'd say "WebLogic" - now it's not uncommon to hear "Spring, Hibernate, Struts2 and GWT".

Runtime Intelligence is sort of the inverse of the idea of an application server. Simply put, Runtime Intelligence allows you to "inject" prebuilt functionality modules into your application. So, does your super application need licensing? Does it need to update itself through patching? How about statistics on how people are using the product?

You got it.

Write your code like you planned to without worrying about "generic" functionality pieces and "inject" them later.
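To make the idea concrete, here's a purely conceptual sketch. The class and method names are hypothetical, and this is not Preemptive's actual API - the point is only the shape of the idea: you write plain business code, and a post-build step injects boilerplate (here, usage tracking) into it.

```java
import java.util.ArrayList;
import java.util.List;

// What you write - no tracking, no boilerplate:
class ReportExporter {
    String exportToPdf() {
        return "pdf-bytes"; // stand-in for the real export logic
    }
}

// Roughly what runs after injection (shown as source for clarity;
// a real injector would do this at the bytecode level):
class InjectedReportExporter {
    static final List<String> events = new ArrayList<>();

    String exportToPdf() {
        events.add("feature:exportToPdf"); // injected usage event
        return "pdf-bytes";                // original logic untouched
    }
}
```

The business logic never mentions tracking; the injected version behaves identically from the caller's point of view, with the boilerplate bolted on afterward.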

What's more, Microsoft has bought into this idea in a big way for .NET. Microsoft today announced a joint press release (See it here) with Preemptive at PDC about this product. If you're familiar with how Microsoft works, you know that joint press releases are pretty rare. It's clear to me that they "get it". This is big.

Preemptive's first round of injectable functionality is pretty slick too: basically a statistics package for your application. Like web analytics, only for applications. Ever implement a feature and wonder how many people really use it? How about finding out that 33% of your customers never get past the 2nd page of your wizard (maybe its design is too confusing?). Or that 68% of users tend to do 5 features in the same order every time - and you could easily add a new feature that does that for them. Simply put, releasing an application into the wild unknown and "guessing" how users use it is a thing of the past.

Surely, this kind of application monitoring makes a lot of sense for certain types of applications (of course, it's not right for every application, in just the same way other boilerplate functionality like licensing or patching would or wouldn't be). Either way, this opens up all new possibilities in development planning for applications. I expect plenty of meetings between sales teams and project managers discussing this data.

Some people have compared this to aspect-oriented programming, but that's quite inaccurate once you look deeper. I've used both, and aspects feel like a sledgehammer (and at least for the packages I've used, an annoying-to-configure one at that). Runtime Intelligence is surgical (as far as I know, no one is doing feature stats with aspects). You write the code and inject real business boilerplate functionality anywhere and any way you want.

As you can tell, I'm really excited about this - and I'm doubly excited that Microsoft is on board with it too. If you're going to PDC this week, definitely check these guys out.

Disclaimer: I have an unhealthy crush on this company. It has gone farther than I ever imagined, and I'm continually impressed by its accomplishments and future.

Thursday, August 28, 2008

Probably the hardest sales job ever

The VP of Marketing of my old company is an absolute master at analogies. One of my favorites was when he described to me the idea of selling a new product that creates a brand-new niche - often a very difficult task.

He likened it to the first person who had to break ground selling thermometers. Not the "How's the weather?" thermometers - I mean the "Do you have a fever?" thermometers. Surely nowadays these are digital little gizmos, but when they came out they were the old-fashioned mercury-based ones.

I can imagine the sales-pitch:

Customer: So, what's it good for?
Salesman: It will tell you your temperature.

Customer: Why do I care about that?
Salesman: Well, then you will know when you have a fever.

Customer: Um. I already know when I have a fever.
Salesman: Yeah, but now you'll be sure.

Customer: Erm.. k.. What's it made of?
Salesman: Glass

Customer: What's that stuff inside it?
Salesman: Mercury - careful, it's toxic.

Customer: How do I use it?
Salesman: You just put it in your butt for 2 minutes.

Customer: Um. So basically, you want me to take this toxic-substance-filled thermo-thing made of breakable glass, stick it in my butt, and leave it there for 2 minutes so that I'll know something I pretty much already knew.
Salesman: Yepper.

Customer: Awesome - I'll take 2 !

That had to be a hard job. Solve a problem that was perceived as not needing solving and then do it in a new, dangerous, and highly uncomfortable way. And you thought software was hard.

Monday, March 17, 2008

A few ideas about Negotiation

A good friend of mine asked me for some negotiating tips. This is what I told her. Use, agree, or disagree with them at your own risk.

1) Never put numbers in email. Email lives forever. Numbers are only discussed on the phone or in person, and only written down when you're signing the contract.

2) There's an old saying "Whoever puts the number on the table first - loses". In general, this is good fallback advice.

I modify this according to several factors:
a) The less you can predict the outcome, the more likely I am to let the adversary say the first number. (e.g., revenue-less company valuations are often voodoo - it's quite possible your buyer will give a higher number than you ever imagined.)
b) The more I need a deal, the quicker I am to say the first number. This sets a tone.
c) Conversely, the less I need a deal - I'm willing to let them show me just how bad they want it. The danger is if they give an extreme lowball, I need to be able to walk.

3) No matter what they offer, ask for more. How forcefully depends on how good the deal already is. If they offer you 10% when you were expecting 3%, meekly ask for 12% and back down fast if needed. If they offer 1%, strongly go for 4% and settle for 2.5%.

4) Don't answer the phone if they call to discuss the negotiation. You are probably thinking about the chicken McNuggets you just ate, and they have been thinking for the last 20 minutes about how the phone negotiation will go. In short: they are prepared, you aren't. Let them go to voicemail. Wait an hour, spend 10 minutes focusing on the possibilities of the negotiation, and call them back. Their mind will be elsewhere now. You'll be ready.

5) Seriously - don't ignore #4. Fifteen seconds is just not enough time to swap your mind into the right context. Besides, information they leave in the voicemail could be advantageous.

6) (Unless you're reading this and you end up negotiating with me - then we might as well set a time in the future to chat otherwise we'll never answer each other's calls.)

7) *Every time* you sign a contract, you are giving up something. Take a step back and make sure you fully understand all that you are giving up - and all that you are receiving in return. Never sign a contract (or sleep with someone, for that matter) because you feel bullied into it.

8) A common negotiating tactic is to put your adversary in an uncomfortable situation. The hope is that the adversary will compromise some just to relieve the discomfort (the more experienced the negotiator, the less likely this is). If you can, reverse the discomfort instead. (This is a classic used-car-salesman tactic - think "But you told me yesterday you were going to buy this car!")

Surely negotiation is an art and there's plenty more to it. These ideas are at best a few tricks and tips. Negotiation is a dance - you can't exactly know what you'll have to do until you are forced to react to what your partner does. Thus just like dancing, practice does wonders for your skill.

Wednesday, March 05, 2008

Writing Java Multithreaded Servers - what's old is new (PDF Slides)

I'm giving another talk tomorrow at the SD West conference:

Here are the slides
Thousands of Threads and Blocking I/O: The Old Way to Write Java Servers Is New Again (and Way Better)


I've encountered some very strong misperceptions in the world that:

1) Java asynchronous NIO has higher throughput than Java IO (false)
It doesn't. It loses by 20-30%, even single thread against single thread. If multiple threads (and multiple cores) enter the equation - which of course blocking I/O is intent on using - it skews even farther.

2) Thread context switching is expensive (false)
New threading libraries across the board make this negligible. I knew Linux NPTL was fast, but I was quite surprised how well Windows XP did (graphs inside the notes).

3) Synchronization is expensive (false, usually)
It is possible for synchronization to be fully optimized away. In cases where it couldn't, it did have a cost - however, given that we have multicore systems now, it's uncommon to write a fully single-threaded server (synch or asynch). In other words, every server design will pay this cost - but non-blocking data structures ameliorate it significantly (again, graphs inside show this).

4) Thread-per-connection servers cannot scale (false)
That's incorrect, at least up to an artificial limit set by JDK 1.6. 15k (or 30k, depending on the JVM) threads is really no problem (note that Linux 2.6 with NPTL using C++ is fully happy with a few hundred thousand threads running; Java sadly imposes an arbitrary limit). If you need more connections than that (and aren't maxing your CPU or bandwidth), you can still use blocking IO but must depart from thread-per-connection. Or fall back to NIO.
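For anyone who hasn't seen the "old way" in a while, here's a minimal sketch of a thread-per-connection echo server using plain blocking I/O - the style the talk argues is competitive again. The class name and helper method are mine, not from the talk.

```java
import java.io.*;
import java.net.*;

// Minimal thread-per-connection echo server: one cheap blocking-I/O
// thread per client, no selectors, no readiness events.
public class EchoServer {

    // Starts the server on an ephemeral port and returns that port.
    public static int start() {
        try {
            ServerSocket server = new ServerSocket(0);
            Thread acceptor = new Thread(() -> {
                while (true) {
                    try {
                        Socket client = server.accept();
                        Thread t = new Thread(() -> handle(client)); // one thread per connection
                        t.setDaemon(true);
                        t.start();
                    } catch (IOException e) {
                        return; // server socket closed
                    }
                }
            });
            acceptor.setDaemon(true);
            acceptor.start();
            return server.getLocalPort();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Each connection thread just does simple blocking reads and writes.
    private static void handle(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println(line); // echo the line back
            }
        } catch (IOException ignored) {
        }
    }

    // Tiny blocking client helper: send one line, return the echoed reply.
    public static String echoOnce(int port, String msg) {
        try (Socket s = new Socket("127.0.0.1", port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println(msg);
            return in.readLine();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Notice there's no state machine and no buffer juggling - the per-connection logic reads top to bottom, which is the readability win that comes along with the throughput numbers above.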

I'll try to spruce up the benchmarks I used and post them. I'd like to point out that writing Java benchmarks is very hard. I spent a great deal of time making sure I warmed up the VM and ensured there were no positional biases or other overzealous or skewing optimizations.

I was then *extremely* lucky to get help from Cliff Click of Azul Systems (if you want to write a benchmark, a VM engineer is the right kind of person to get help from). He spent half a Saturday tweaking my benchmark in ways I never thought of, then ran it for me on his 768-core Azul box (graph inside)! Thanks, Cliff!

Sunday, March 02, 2008

Notes for my SD-West talk tomorrow on Interviewing in Silicon Valley

I'm giving a half-day tutorial tomorrow at SD-West 2008 in Santa Clara on How to Pass a Silicon Valley Software Engineering Interview.

It's the first of 3 talks I'm giving this week (notes for the others to come).

You can download the slides Here.

If you're not attending the talk, please note that as with all slide decks, they are in a sense only half the story - as I'll be filling in many pieces during the lecture itself.

This is the third year I've given this talk, and I'm amused to mention that I got a thank-you a few weeks back from a veteran SD speaker who attended last year's class and is now just starting his new job at Google. He said the class was very helpful (he also offered to buy me dinner, but given that dinner at Google is free, I countered and offered to buy him dinner instead :)

Sunday, February 24, 2008

Customers are the most honest people you'll ever meet

My first startup was Preemptive Solutions, Inc., where apart from other activities, I turned the core of my Ph.D. dissertation into a Java bytecode optimizer called DashO. It was named after the javac (and gcc) command line option "-O". I thought it was a dashingly clever name at the time (the idea of "hyphen-O" seemed nowhere near as cool).

In hindsight, it was a pretty dubious product idea. Developer tools are a tough business. For all the talk of application performance, people don't often pay for it except in the form of bigger hardware. However, as I came close to completion of the code, the idea morphed itself into something much more viable (as startup ideas are wont to do).

Turns out people weren't willing to pay for performance so much, but at the time, Java applets were taking off. And people were dying on applet download times - making applets smaller became a component of business success. A wonderful side effect of my code optimizer was that it also made code smaller. And with a few added features focusing on that, it made Java applications amazingly smaller than the original. Getting a 50% size reduction (mostly via bytecode manipulation, dead class/method removal, and identifier renaming) wasn't unusual. The product was a hit.
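As a toy, source-level illustration of one of those techniques - identifier renaming - consider the sketch below. The names are hypothetical, and tools like DashO do this to bytecode rather than source, but the effect is the same: shorter names, identical behavior, smaller class files over the wire.

```java
// Before shrinking: readable names that cost bytes in the constant pool.
class GreetingBuilder {
    String buildGreeting(String name) {
        return "Hello, " + name + "!";
    }
}

// After renaming: same behavior, far fewer bytes spent on identifiers.
class a {
    String a(String b) {
        return "Hello, " + b + "!";
    }
}
```

Multiply that saving across every class, method, and field in an applet and the download shrinks noticeably - which, combined with dead class/method removal, is where reductions like 50% came from.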

As time went on, applets gave way to Java ME - and source code protection was added later - but the idea of small code prevailed. If your Java ME application doesn't fit on the phone, then you really can't expect to get many users.

While writing that code, I had just finished writing the book Java Primer Plus. Honestly, I thought I was a pretty crackshot Java programmer. As time went on, of course I kept learning. There really is a teenager phase in your lifecycle of learning a language. It's a distinct point where you're convinced that you know it all. As Mark Twain said (summarizing), "When I was a boy of 14, my father was so ignorant I could hardly stand to have the old man around. But when I got to be 21, I was astonished at how much the old man had learned in seven years."

In your post-teenager phase, the biggest thing you often learn is how little you actually knew. Thereafter, coding in the language moves from your brain to your brain-stem and finally to your fingers. I can palpably tell the difference. When I code in a language I've done for more than a few years, I don't have to think about the language at all. My brain does things like data structures, concurrency, and algorithms whereas my fingers do the coding.

It was about the time that DashO made its first million in sales that I really sat down and realized how badly the underlying code was designed. My deficiency in Java when I had first coded the app was now obvious to me throughout the code base. The first thing that struck me was that I had made nearly every method in the damn application static. Talk about a C programmer moving to Java. I remember being convinced it would make things run faster (of course, I probably never tested that, given I was so sure of myself).

I remember thinking that it almost seemed wrong that such bad code could be so successful. But it was. To be fair, it was quite bug-free and actually did what it advertised. Its success was that it consistently brought value to its users - and they were more than willing to pay for that. They really didn't care if every method was static or if I was bubble-sorting my way until Tuesday - if it shrunk their J2ME application by 50%, that was good enough for them.

Customers are honest. They vote with their credit cards and their attention. I've heard of more than a few startup launches being delayed because they "needed to rewrite some core pieces". Clearly, rewriting your code is essential at times, but at other times I've seen it be a feel-good technical decision and a downright bad business one. Quite often it's like polishing your car's engine: it might make you feel better, but the car won't run any differently, and no one else is really going to notice when you drive down the street. The simple moral is that if you're going to delay your product launch because of a rewrite, just be sure it's worth it. Delays have been known to be fatal.

These days, I don't get to visit Preemptive as often as I'd like but I'm happy to report it is a very successful 25+ person company, still growing, and is moving into some very exciting (to me anyway) new product directions. A new set of developers work on DashO and it continues to grow in both features and users (and happily, they've evolved the code-base into something far more reasonable to maintain). I'm still amazed at how well the application sells, but I guess I shouldn't be. As long as DashO keeps bringing more and more value to its customers, it will remain a successful product. And things like where-you-went-to-school or how static you decided to code your methods be damned. Value is value.