Today I stumbled across Rands’ article praising simple design and criticizing the multitudes of options most products, especially software products, seem to offer us. Considering I write and code only in VIM, compile Linux on all my computers from source, and don’t even have a GUI of any kind on the computer I’m using to write these particular words, it should be clear where I stand.
While it's great up to a point, I ultimately find the idea of extreme simplicity in product design unhelpful.
Okay, first, let's get some things out of the way where we agree. Is there a lot of software out there that is unnecessarily complicated? Yes. Does radically simplifying a user interface and providing intelligent defaults lead to wins for everyone? Absolutely. Did anyone really want separate “Apply” and “Save” buttons in their option dialogs, instead of everything saving automatically? Hell no.
What’s my problem then? Years of relentlessly following this road lead us to a world where everyone has devices with beautiful form factors and intuitive interfaces, but all their functionality is encapsulated in small, incompatible, single purpose apps.
This line of thinking kills general purpose utilities, and it doesn’t allow for those magical tools that can do things their creators never envisioned.
It kills creativity and the wonderful feeling you can only get from creative tasks where there are no limits to what you can make.
Overly minimalist, single purpose products enable and promote consumption, not creation.
Hopefully I’m wrong, but while there have been millions (even billions) of people in recent years experiencing computing for the first time with new mobile phones, it doesn’t seem that people are seeing these devices and thinking of anything beyond the immediate uses that a few apps can give.
Despite having more computing power than we could have dreamed of just a few years ago, wireless connections to the internet wherever they go, and a suite of hardware like accelerometers, excellent touchscreens, and GPS receivers that have never before existed together on any computing platform, our smartphones are nearly useless for creating anything beyond selfies and status updates.
I don’t think this is intentional. No one is actively trying to discourage creation.
But we may soon be entering a strange time in which computing is ubiquitous but the tools needed to program these devices are not.
With desktops and laptops, we had both, and really, it was accidental.
Every home had a computer, and no matter what it was purchased for, every one of those computers had the power to create amazing things.
As someone who discovered programming largely on my own on a computer originally meant for playing games, I can’t help but think our industry or society in general will be missing something when all we have are locked down smartphones.
So what should we do?
With the possibility that fewer people will discover programming on their own, we’ll have to be even more deliberate about recruiting them.
We’ll have to take every opportunity to remind people, especially young people, that the little device has amazing and limitless power to create.
That the people who designed, built, and programmed your phone are no different from you, no smarter, and that you can do it too.
Months ago, after watching the Nth amazingly beautiful timelapse of San Francisco made by someone else, I realized I had all the tools to do the same thing.
I have a Panasonic DMC-GF1 which is, if not the world’s best camera, perfectly capable of taking beautiful shots at all hours of the day.
I have a respectable ability to write shell scripts to simplify mundane tasks.
I have the perseverance and willingness to slog through man pages, google searches, and stackoverflow posts to figure out how to use one of the many video editing tools available.
I didn’t have a remote shutter with repeat mode, but I bought one on Amazon.
I figured this would be everything I need to make a timelapse video, and I was right.
Making a GOOD timelapse, however, takes a little trial and error.
Let’s start from the beginning.
Timelapse 1
January 17th
Armed with my camera, an 8GB memory card, a tripod, and two fully charged but aged batteries, I got started.
I pointed my camera out the window, spent 20 minutes reading the remote shutter manual (boy was it terrible), and walked away.
Shooting in RAW mode (like any good photographer, of course!) at 14MP per shot, my camera ran out of memory only 4 hours later.
At 15 seconds between frames, and 25 FPS in the final video, that’s just 19 seconds of video.
Not much of a timelapse.
Fortunately, I was closely watching the status of my camera as it ran out of room and promptly switched in another memory card and battery.
Unfortunately, to swap the battery and memory card I had to take my camera off the tripod, and of course you can never get it on in exactly the same way again.
So, 19 seconds into the video below you’ll see the view shift, but I was able to double the length of my timelapse to 8 hours of real time and 38 seconds of video.
The next challenge was converting a bunch of RW2 files (Panasonic’s raw format, which isn’t well supported most places) into a video.
As it turns out, converting a bunch of RW2s to a format that can be easily encoded into a video is harder than encoding the video itself, and after a little fiddling I produced the following 38 seconds of video:
Besides being short and having a big view shift in the middle, this video has a couple of problems:
You can see part of my window on the left
You can see everything in my apartment in the window reflections
There’s a HUGE pole in the middle of my frame.
Well, we can’t do anything about the third thing, but we can set out to fix the reflections and adjust our frame a bit to avoid any window panes.
I also ordered two things that night: a 64GB memory card, and an AC adapter for my camera.
Timelapse 2
January 18th
Great news!
My 64GB memory card arrived (Amazon prime!).
Bad news!
My camera is too old to support SDXC, the SD card format variant for capacities of 64GB and up.
I’ll have to make do with 32GB cards until I get a new camera, and 8GB cards until a 32GB card arrives (anyone want to buy a 64GB SDXC card?).
I decided to shoot in JPEG mode instead of RAW to at least get a little more space out of my memory card.
The video framerate is lowered to 20FPS to make it a little easier to see everything that's happening, too (at 15 seconds between frames and 20 FPS, each second of video covers five minutes of real time).
Here's the second video:
Ouch, this one isn’t level at all.
But, on the upside it’s got no bad reflections, a great sunset, and it’s a full minute and 46 seconds, with only a minor blip when I swapped memory cards.
Timelapse 3
January 24th
Finally I have a larger memory card, and this time I’m determined to make use of it.
With 32GB capacity, my camera can hold almost 6000 JPG images, and at 15 seconds between frames that’s right around 24 hours.
I set everything up at 12:10PM and recorded a photo every 15 seconds until 12:20PM the next day, for a total of 5995 frames!
This time I taped a big dark green bedsheet behind the camera and amazingly, it worked great: there’s really no glare at all in this video.
Encoded, this makes a 3 minute, 52 second video, with over 24 hours captured.
Really, this one is quite beautiful.
The sky was overcast but the movement of the clouds is awesome, and sunset and sunrise are just fantastic.
Be sure to watch this one all the way through.
Timelapse 4
January 25th
Now that I have the basics down, it's time to just keep repeating and making some interesting videos of different weather and events.
This one is 5776 photos from 1:34PM on the 25th to 1:26PM on the 26th.
Unlike the previous video, this one is much sunnier, which made glare a bigger problem than before.
Worse, the dark sheet I was using to keep most of the glare away fell down while I was recording (you can see it disappear 12 seconds in).
So while it's not what I intended to capture, you can see a great reflection of the shadows of everything in my apartment, most notably my bikes, moving across the room as the sun shifts position.
Timelapse 5
July 24th
I took a bit of a hiatus after the last one, but am back with a new technique.
Previously, my camera had been inside my apartment, ensuring that any glare, dirt, or reflections on my windows were captured in the video.
My bedroom, however, has a ledge outside the window large enough to set up my tripod, so this time my camera is set up outside, and there are no reflections at all.
One little mistake, though: I forgot to clear my memory card before starting, so this video is only 38 seconds long, and all you get is some daytime.
Timelapse 6
July 24th
With the memory card reset after a false start earlier in the day, we’re back with a full length video this time.
As I was hoping to really capture eventually, the clouds rolled in spectacularly that night, and it’s really quite fantastic to watch.
However, it also presents a problem: my camera’s autofocus seems to get confused.
As a result, the time between photos isn’t consistent, and even worse, some frames effectively have different zoom levels due to differences in how the camera chose to focus that frame.
Next time, I’ll be setting up manual focus before starting the timelapse, so this should go away.
The script
For those that might want to encode their own timelapses, here’s the script I used.
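What follows is a minimal sketch of that pipeline rather than the exact script: it assumes the frames are JPEGs in the current directory, named in shooting order, that ffmpeg is installed, and that dcraw is one option for converting RW2 files first.

#!/bin/bash -ex
# Sketch of a timelapse encoding pipeline.
# Assumptions: JPEG frames in the current directory, ffmpeg installed;
# dcraw is only needed if you shot RAW.
FPS=20                 # playback framerate (20 FPS from Timelapse 2 onward)
OUT=timelapse.mp4

# Optional: convert Panasonic RW2 raw files to TIFFs that ffmpeg can read.
# dcraw -w -T *.RW2

# Encode the image sequence into an H.264 video.
ffmpeg -framerate "$FPS" -pattern_type glob -i '*.JPG' \
       -c:v libx264 -pix_fmt yuv420p -crf 18 "$OUT"

The -pix_fmt yuv420p flag matters: many players won't handle the chroma format ffmpeg would otherwise pick for JPEG input.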
Last Saturday was the Stripe Hack to the Future hackathon.
I’ve been working on attending more tech talks and other events lately, and since Stripe is pretty well known and I only live a block away, this one was a no-brainer.
Naturally, they encouraged people to utilize Stripe, so I spent a few minutes the night before thinking about what to build.
I quickly decided it should be something involving small transactions, and that it should be relatively silly: not something that would look like a “real” business idea.
The idea
Near the front entrance of Hacker Dojo is a credit card reader, used to accept donations.
Unlike normal credit card readers, this one donates a random amount up to $20 on your behalf when you swipe your card.
While neat on its own, the real fun starts when you can get a bunch of people to all swipe one after another.
Often we can arrange to have a little prize like a t-shirt or coffee mug for the “winner” who ends up donating the most.
The money goes to a good cause, isn't large enough to really care about, and the whole experience has a fun air of competition: there's a prize at stake, and suspense is high, but the outcome is not under anyone's control.
With a little tweaking, I figured building a small web app basically simulating that same experience could be a lot of fun.
The Hackathon
I went over to Stripe HQ the next day and after having a quick bite to eat thanks to Stripe’s amazing culinary team, I set about recruiting people to get building.
Normally, even at an event like a hackathon where you know you have something in common with almost everyone there, it’s a little hard to just start talking to people.
But with the goal of starting work ASAP, and most people’s desire to get working on something too, it was easy to get things started.
Within about 30 minutes we had a team of four (the perfect size!) ready to get going.
Most amazingly, we not only quickly came up with a great name, but a name with an available .com domain name!
By the end of the night, we actually had made great progress.
There’s a lot more to be done, but there happens to be another hackathon this Saturday at RISE where we plan to finish everything up.
Lessons Learned
While what we built was really cool, actually building it was one of the most valuable experiences I've had in the last few months.
I’ve been reading and thinking about all sorts of startup and software development related topics lately, and this was a great chance to reflect upon them.
A few times during the hackathon particularly interesting thoughts crossed my mind.
They might not be completely unique, but they’re still powerful.
Stay flexible
Since I was basing my hackathon idea off of a real world experience, I was lucky to have an extremely clear idea of what I wanted to build.
I had specific interactions, pricing, and even wording in mind from the very start.
By the end of the night, none of the details of what we had built were similar, although the overall premise was preserved.
Initially, I was a little concerned when people started suggesting things directly in conflict with details that were, in my mind, already decided.
I realized I had to let go of any specific vision I had, and just let our project evolve with input from the entire team.
Partially, this was because I knew I had to ensure everyone on the team wanted to keep working, and if I was too firm on any particular detail, they might decide they weren’t interested in helping anymore.
Of course, there was absolutely no reason for me to believe that any preference I had for the direction of our project was automatically correct, and I have no doubt that the combined input of four people made it far better than I ever could have hoped to achieve on my own.
Urgency helps with decision making
Within about 10 minutes of gathering a small team, we had firmly made an incredible number of major decisions.
What language should we use? What hosting provider? Should we build a mobile app?
Those decisions alone could have taken weeks at even a small company.
Initial versions of user flows and interactions took us about 15 minutes, but could have taken even a fast moving startup a while.
The single hardest choice for any company, what to name your product, was decided in 30 seconds.
There's nothing special about anyone on the team that caused us to make decisions so quickly: we simply didn't have a lot of time, and therefore had a strong sense of urgency.
No doubt there are thousands of smaller decisions we could have worried about, but more important than any of them was our need to just build stuff.
We didn’t spend any time talking about coding styles, indentation, or any of the other classic programming debates.
All software developers, of course, have extremely strong opinions on each of these topics, so not having to debate them was extremely refreshing.
Going forward, I’m going to focus on keeping the same sense of urgency for each and every project I work on, and hopefully all of them will be more successful.
Focus on the core of your project, outsource everything else
Another key to moving fast was to spend time only on the parts of the project that make it interesting, and let someone else take care of everything else you can.
No one cares about the efficiency of our web servers, even if we all would have enjoyed tweaking them, so we used Heroku and were live in minutes.
When we realized we needed a pub/sub system, we didn’t want to have to deal with setting up our own, so we let Pusher take care of it, and went back to working on something else.
For us, and for any team starting out, focusing only on what the user sees is the only way to go.
Leave everything else to someone else.
Keep your team productive
I came to this hackathon mostly expecting to gather a team, come up with a basic design, and then more or less sit down and code.
In fact, for about two thirds of the night, I spent all my time just making sure everyone else could work.
At first, it might seem like this is silly.
If I too had just gotten to work, there would be four people working instead of three.
But look at it another way: which is better, to have three people working in a productive, uninterrupted state, or four people constantly having to get sidetracked?
I spent all night setting up dev apps, SSL certs, celery queues, and whatever else needed to be done.
Quite frankly, all the other team members worked on the hard, exciting stuff, and did a great job.
But they wouldn't have done as awesome of a job without my unglamorous help from the sidelines, and that's a great feeling.
Advice for Stripe
I really want to thank Stripe for hosting this hackathon and letting everyone eat their tasty food, but I also want to give some feedback that might make future events better.
As a quick disclaimer I want to mention that I was fairly heads down for much of the night, so if I missed something, my apologies.
Do some sort of judging
While the event was ostensibly a hackathon, and indeed much hacking was done by several teams, it would be more correct to call it an office hours session.
Guests were in attendance, even working on their own projects, but there wasn’t much structure imposed by Stripe, and at the end of the day the event fizzled out with no strong conclusion from Stripe.
There’s something to be said for a low key gathering, but Stripe, next time you host a hackathon, go all in on actually making it a hackathon.
Have people register teams, do some sort of judging, give out a prize, the usual.
Just helping people organize into teams probably would have tripled the number of people seriously working.
On the other hand, a more structured hackathon might have made it harder for me to pick up new team members, so maybe I should be careful what I wish for.
Make your employees more active
As a small team using Stripe for the first time, I can’t think of a better place to have been working than in the Stripe offices.
Stripe employees were available to answer any questions we had, and they were overall really eager to help.
That said, it seemed like we were always having to seek them out.
Maybe I just missed it, but I would have loved to have been bugged by Stripe employees dropping by every couple minutes, just to chat with us about our project.
They could have quickly checked out how we were using Stripe, warned us before we ran into known problems, and helped us make things better than we could have on our own.
Today I received in the mail a brand new 3TB hard drive for storing my multitudes of bits.
While I was eager to get started using it, I couldn’t help but dig into all the fun details of the new technology I have acquired.
There are two interesting considerations with a drive of this size, and they both come down to something many people might not know much about: sectors.
A sector is basically a subdivision of usable space on a hard disk.
When your operating system wants some data from disk, it asks for data by sector.
For decades the standard size of a disk sector has remained unchanged: 512 bytes.
Recently however, two interesting things have happened.
First, with the release of hard drives with capacities larger than 2TB, more than 2^32 sectors are required to address all data on disk.
Unfortunately, the ubiquitous MBR partition table only supports up to 2^32 sectors per partition (with 512 byte sectors, 2^32 × 512 bytes is 2 TiB, or roughly 2.2TB, which is where the limit below comes from).
Second, hard drive manufacturers, in their never ending journey to give us more storage space, have realized that sectors of only 512 bytes no longer make sense.
By using 4KB sectors, it is actually possible to store more data on the same hard disk, because each sector carries some fixed overhead (sync marks, error correction data, and inter-sector gaps), and fewer, larger sectors mean less space lost to that overhead.
What does all this mean?
Most obviously, it requires that anyone wishing to use more than 2.2TB in a single disk use the new GUID Partition Table
(it’s possible to cleverly utilize more than 2.2TB of a single disk with multiple MBR partitions, but this often does not work with many operating systems).
Support for GPT is quite good amongst all operating systems now, and it is required for EFI, which is growing more common as well, so this is not much of an issue.
More insidiously however, it means that your hard disk is lying to you.
Since sectors have been 512 bytes literally for decades, our friendly hard drive manufacturers assumed that no operating systems would be ready to support sectors of any size other than 512 bytes (perhaps they assume programmers don’t always properly use named constants for values such as sector sizes, which of course is ridiculous).
Their clever solution was to have disks store data in 4KB sectors, but continue to advertise to the operating system that sectors are 512 bytes long, and then handle the bookkeeping to translate between the two themselves.
So now there are two sector sizes worth worrying about: the logical size – how your operating system talks to your hard disk, and the physical size – what your disk actually does internally.
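On Linux you can ask the kernel for both numbers directly; a quick check (the device name here is just an example) looks like this:

# Print the logical sector size, then the physical sector size (util-linux).
blockdev --getss --getpbsz /dev/sdc

For a drive like this one, that should print 512 followed by 4096; the same values are also exposed in /sys/block/sdc/queue/ as logical_block_size and physical_block_size.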
This is all well and good, except that it breaks an implicit assumption about how much work a hard disk has to do when writing data.
Consider the case of an operating system writing to two consecutive 512 byte sectors.
With 512 byte physical sectors, this is assumed to require a total of 1024 bytes be written to disk (a hard disk will generally only read and write, at minimum, a whole sector, regardless of how much or little data actually changes).
But what if those two 512 byte logical sectors were not part of the same physical sector?
Your hard drive has to write both physical sectors, a total of 8192 bytes!
If you’ve read any literature about SSD performance over the last few years, you’ll recognize this problem: it’s known as write amplification and like anything where more work than required is done, it’s not good for performance.
So how much performance is lost with a misaligned partition?
Timothy Miller investigated by writing a small C program to force write amplification.
Curious, and always a sucker for small C programs, I ran his code myself.
Here’s my version:
The method is simple: write 4096 bytes to 1000 random locations.
By default, the program ensures that the write starts and ends at a 4KB sector boundary, but the first argument specifies an offset in 512 byte increments.
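As a rough shell stand-in for the same idea (this is a sketch using dd with direct I/O, not the C program itself; the device path is a placeholder, and running it will overwrite whatever is on that device):

#!/bin/bash -e
# Rough equivalent of the write amplification test: write 4KB with O_DIRECT
# at 1000 random locations, shifted by $1 * 512 bytes from a 4KB boundary.
# WARNING: this writes straight to the raw device and destroys data on it.
DEV=/dev/sdc            # placeholder: point this at a disk you can scribble on
OFFSET=${1:-0}          # offset in 512 byte sectors; anything not divisible by 8 misaligns

for i in $(seq 1000); do
    # Pick a random 4KB-aligned sector in the first ~100GB, then apply the offset.
    sector=$(( (RANDOM * RANDOM % 200000000) / 8 * 8 + OFFSET ))
    dd if=/dev/zero of="$DEV" bs=512 seek="$sector" count=8 \
       oflag=direct,dsync 2>/dev/null
done

With an offset of 0 every write lands entirely inside 4KB physical sectors; with an offset of 1 each write straddles two of them.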
Any offset not evenly divisible by 8 will cause write amplification, and as it turns out, the performance penalty is serious:
spectre256@ocean ~ $ sudo time ./testWriteAmplification 0
0.00user 0.02system 0:16.17elapsed 0%CPU (0avgtext+0avgdata 1664maxresident)k
0inputs+0outputs (0major+144minor)pagefaults 0swaps
spectre256@ocean ~ $ sudo time ./testWriteAmplification 1
0.00user 0.04system 0:26.45elapsed 0%CPU (0avgtext+0avgdata 1664maxresident)k
0inputs+0outputs (0major+144minor)pagefaults 0swaps
This brings us to the dreaded A-word: alignment.
While occasional write amplification would be fine, what if your system was set up in such a way that write amplification is inevitable?
This is the danger of differing physical and logical sector sizes.
In fact, the default starting sector for many Windows partitions is 63.
This has led many other tools to copy that default, resulting in misalignment and reduced performance.
Some hard drives even internally shift all sectors by one so that such systems default to correct alignment.
Testing different alignments
While the test above showed serious theoretical performance reduction from misaligned writes, I wanted to know what would happen in the real world, so I devised some simple testing to investigate.
Sector 34 is the first available to start a new partition, after accounting for the space needed by GPT.
Since 34 is not evenly divisible by 8, a partition starting at sector 34 will not be properly aligned, and is a good choice for testing misaligned performance.
Sector 40 is the first possible correctly aligned sector, so I used this as the starting sector for the aligned partition.
Creating the partitions
Using sector 34 as the starting point, I created the misaligned partition using GNU Parted, and then created an ext4 filesystem:
ocean ~ # parted /dev/sdc
GNU Parted 3.1
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mkpart ext4 0s -1s
Warning: You requested a partition from 0.00B to 3001GB (sectors 0..5860533167).
The closest location we can manage is 17.4kB to 3001GB (sectors 34..5860533134).
Is this still acceptable to you?
Yes/No? y
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? i
(parted) q
Information: You may need to update /etc/fstab.
ocean ~ # time mkfs.ext4 /dev/sdc1
mke2fs 1.42 (29-Nov-2011)
/dev/sdc1 alignment is offset by 3072 bytes.
This may result in very poor performance, (re)-partitioning suggested.
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
183148544 inodes, 732566637 blocks
36628331 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
22357 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
real 0m29.931s
user 0m1.671s
sys 0m0.293s
Here’s the same procedure for the aligned partition:
ocean ~ # parted /dev/sdc
GNU Parted 3.1
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) rm 1
(parted) mkpart ext4 40s -1s
Warning: You requested a partition from 20.5kB to 3001GB (sectors 40..5860533167).
The closest location we can manage is 20.5kB to 3001GB (sectors 40..5860533134).
Is this still acceptable to you?
Yes/No? y
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? i
(parted) q
Information: You may need to update /etc/fstab.
ocean ~ # time mkfs.ext4 /dev/sdc1
mke2fs 1.42 (29-Nov-2011)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
183148544 inodes, 732566636 blocks
36628331 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
22357 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
real 0m14.333s
user 0m1.646s
sys 0m0.289s
There are two interesting things to note.
First, mkfs warns you when your partition alignment is incorrect.
Second, initializing the ext4 filesystem was significantly faster on the aligned partition, validating both the warning from mkfs and the initial testing.
Note that parted warns about improper alignment in BOTH cases.
It turns out parted is only happy with 1MB alignment (for SSDs), which is too conservative in this case.
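As an aside, parted can report alignment explicitly: align-check minimal 1 tests partition 1 against the device's physical sector size, while align-check optimal 1 applies the stricter 1MB rule. And if losing a few sectors at the start of the disk is acceptable, simply asking for a 1MiB start keeps parted quiet and guarantees 4KB alignment; a sketch of that on the same disk:

ocean ~ # parted /dev/sdc
(parted) mkpart ext4 1MiB 100%
(parted) q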
Testing “real world” performance
To do my actual testing, I created a simple script that tested a small aspect of “real world” performance.
I wanted to test writing both small and large files, as well as some reads.
As a Gentoo user, I realized that simulating an update of the Portage ebuild tree would represent a good small file use case.
For those not familiar with Gentoo, the Portage ebuild tree is a collection of text files used to automate the compilation of system packages.
On my system, it currently consists of 137453 files in 23876 directories totaling 720MB on disk.
To simulate the action of updating the ebuild tree, I extracted an old and new snapshot to tmpfs,
then used rsync to copy the old, and then new snapshot to the same location on disk.
For large file performance, I tested copying a 4.4GB file from tmpfs to disk.
Here’s the full script that allows me to create and mount a new filesystem, run the tests, and then unmount the filesystem in one step:
#!/bin/bash -ex
mkfs.ext4 /dev/sdc1 > /dev/null
mount /dev/sdc1 /mnt/test
time rsync -aH /root/tmpfs/old/ /mnt/test
time rsync -aH /root/tmpfs/latest/ /mnt/test
time cp /root/tmpfs/bigfile /mnt/test
umount /mnt/test
Results
I ran my test setup 3 times for both the aligned and misaligned partition, recreating the partition and filesystem after each test.
Here's the average of the 3 runs for each test:
                                         Rsync old snapshot   Rsync new snapshot   Copy big file
Misaligned Partition (sector 34)         9.046s               0.877s               45.837s
Correctly aligned partition (sector 40)  7.399s               0.939s               33.348s
Speedup for correct alignment            18.2%                -7.0%                27.2%
Testing Conclusion
Based on the tests, there is a significant real world performance speedup when using a correctly aligned partition, both for large and small writes.
Interestingly, there is a small performance penalty shown during the second test.
I’m going to assume this test wasn’t valid: I grabbed portage snapshots only a few days apart, meaning the changes to be synced are minimal.
It's doubtful that program execution times below one second are even accurate enough to be meaningful.
If someone else can come up with an explanation though, I’d love to hear it.
Future work?
After doing all this testing, I started to wonder if the partitions on my SSDs are aligned correctly. SSDs are even more prone to write amplification, partially due to the fact that flash storage generally has to erase in large blocks (up to 256KB).
Hopefully in the next couple weeks I’ll have time to write another blog post about it.
Unaligned performance with 512 byte sectors
Just for fun, I wanted to see if there was a theoretical performance penalty for 4KB writes on a hard drive with 512 byte physical sectors, so I ran the write amplification script on an old 640GB drive that my new 3TB drive is replacing.
pismo ~ # time ./testWriteAmplification 0
real 0m16.799s
user 0m0.000s
sys 0m0.046s
pismo ~ # time ./testWriteAmplification 1
real 0m22.654s
user 0m0.000s
sys 0m0.066s
Surprisingly, there was a performance penalty, although not as significant (I ran the test multiple times and the performance is consistent with the times shown above).
I imagine even hard drives with 512 byte sectors are optimized for writes aligned at 4KB.
The takeaway here is that it’s important for all partitions, regardless of the underlying sector size, to be aligned correctly.
Reference
For a full summary of the state of 4KB sector issues, the Linux ATA wiki has a comprehensive page.
Full data from real world testing
For reference, here’s all the performance data from my test script.
Whenever you leave a job, the most important question is “where are you going next?”.
Having quit my last job in March, and not yet permanently settled on anything, I have put considerable thought into this question.
Immediately, I knew quite a few things.
I wanted to work somewhere small, where my contributions will be significant and varied, and where I can learn many things.
I wanted to work with amazing people, who will push me to become better myself.
Finally, and most importantly, I wanted to work on something that without any reasonable doubt is a net positive for the world.
As a software developer in the bay area, especially one lucky enough to have had some great experience in only a few short years of employment, those first two criteria are not especially hard to meet.
But is it even possible for a business to be sure it is providing not just something that people will pay for, but something that is “good”?
Companies like AT&T and Sprint provide valuable services, but, as seems inevitable for large companies, provide terrible customer service, engage in shady lobbying and business dealings, and receive near universal loathing even from their continued customers.
Meanwhile, providers of enjoyable, but dubiously valuable services like Facebook and Twitter are essentially in the position of having to degrade their free product in order to make money, again causing incredible discontent among their users.
Wouldn’t nearly any business find a way to anger someone in a way they can’t make a business case to remedy?
On the opposite end of the spectrum, well-meaning people with a desire similar to mine have founded countless startups with charitable goals in mind.
There must be a thousand new companies intent on educating, feeding, or providing technology for those in less fortunate areas of the world founded this year alone.
While their goal is always noble, I have talked with many of these companies and have never met one with a feasible business plan; in fact, many seem hopelessly naive in their disregard for profitability or general business sense.
Coming from me, this is a strong criticism.
When I first started looking for a new company to join in March, I mostly focused on consumer-facing companies.
B2B companies, while often very profitable, seemed quite frankly boring.
While the technical challenges may be there, who wants to labor hard and long only to build better account tracking software?
Even worse, many of the inefficiencies, frustrations, and restrictions that pushed me away from a comfortable job at an increasingly large company are even more prevalent, indeed pervasive, with B2B.
I turned down invitations to interview at many B2B focused companies even with extremely interesting technical goals.
But as time went on, I became more and more frustrated with consumer focused startups.
Most startups (even many that are well known and have raised considerable money) can barely make the case that they are making something useful, and the chances of providing this useful thing profitably are near zero.
I have nothing against acquisitions, but a company that seems from the outset to have no exit other than an acqui-hire is unappealing.
Conversely, there are plenty of companies that make things people want, but don't need.
At best, these companies are like McDonalds: they make something no one should eat, but many claim to love anyway.
At worst they are like cigarette companies.
But over time, I noticed a trend among companies I truly respect and admire, and
with Yonas Beshawred’s recent post, I know what to call them: B2D companies.
And while just targeting software developers doesn’t instantly guarantee you’ll run an ethical and profitable company, there are a lot of reasons why it might be more likely.
Most obviously, whatever it is you're doing, the expectation is simple: you are selling something that can save a considerable amount of time.
For pretty much every software developer, saving time is worth spending money.
Furthermore, software developers have specific needs and they will tolerate exactly zero bullshit.
Your product had better do what it says, with style, painlessly, or they will never come back.
If you’re lucky they will write a critical tweet on their way out.
If the description above makes you think selling a product to software developers isn’t easy, you’re right.
But really, that’s exactly the point.
Like many software developers, nothing on my list of things I look for in a job says I don't want a challenge; indeed, I require a challenge.
Finally, as Paul Graham recently wrote, the best way to come up with an idea for a startup is to find a way to improve something you already know about.
For someone like myself, for whom software development is more than a job or even a career and is in fact a way of life, solving other developers' problems seems only natural.
I haven't decided what I'll do next, but I know I'll be paying close attention to any chance to make something for other developers.