
Ethereal Bits

Tyson Trautmann's musings on software, continuous delivery, management, & life.


Off Topic

Universities Are Alive & Well

February 18, 2019 by Tyson Trautmann

The impending demise of the university has been greatly overstated.

Since the launch of the first massive open online course (MOOC) back in 2006, a vocal group of university-dislikers has taken to social media to aggressively declare the university model obsolete and dead. The latest round of vitriol was seemingly spurred when Lambda School, a trade school that teaches coding to students online and uses an income sharing agreement in lieu of tuition, announced that it had closed a $30M Series B funding round. University naysayers see new alternatives like Lambda School as attractive drop-in replacements for an outdated higher-education model.

Meanwhile, the data suggests that universities are alive and well, with most meaningful metrics moving up and to the right. Detractors rip the increasing toll of loans on graduates; a recent report from the Institute for College Access and Success claims that the average graduate takes on $28k in debt to attend school. But figures from a survey conducted by the Bureau of Labor Statistics show that attaining a university degree increases average annual earnings from $35k to $59k, dwarfing the cost of loan payments and showing that the broad financial value proposition of higher education is extremely attractive. Student demand for universities continues to skyrocket, particularly at top-tier schools. For example, the University of California Berkeley received 108k applications for admission in 2018, up 120% from the 49k that the university received in 2009. Demand from employers for graduates of top universities is also high. PayScale data shows that the average mid-career salary for a Stanford graduate grew from $112k in 2012 to $157k in 2018, an increase of 40% in just 6 years.

If students are turning out in record numbers to attend universities and the job market continues to place increasing value on a diploma from a high-caliber institution, why are people so quick to hate on schools and look for a new model? There are a couple of good reasons. The first is that the current model is hitting scaling limits. According to a paper published by UNESCO, 207M students around the globe are currently studying at higher-education institutions, but that’s still a relatively small fraction of the total population of 1B people between the ages of 16 and 24, and it would take decades to increase capacity by 2-4x with the current model. The College Board reports that the 10-year average tuition cost increase is roughly 5% per year, well ahead of inflation, which will make higher education even harder to access in poor areas. The second reason to challenge the current model is that it negatively impacts upward mobility by providing a disproportionately large opportunity to people from wealthy families. A report released by Opportunity Insights last year showed that 38 universities, including 5 of the 8 Ivy League schools, admitted more students from families in the top 1% of the income scale (families that earned more than $650k per year) than students from families in the bottom 60% (families that made under $60k per year).

Software scales infinitely, so it’s not surprising that people have embraced software platforms like MOOCs and software-powered coding boot camps as a means to make education scalable, provide access to all, and increase upward mobility. The problem is that current offerings like Lambda School that are being touted as university replacements are only offering a subset of what universities provide. The caliber of schools obviously varies wildly, but a good university offers students the following:

  1. Courses that provide deep knowledge in a subject of specialization. This includes the theory behind a subject, not just the practice. The theory is important because it provides a platform to understand the state of the art as practices evolve.
  2. Courses that provide broad knowledge in other subjects. Most innovation is happening at the intersection of multiple domains (e.g. computer science and biology, cryptography and game theory), and jobs that are further from the bleeding edge will be earlier targets for automation. A general understanding of math, science, philosophy, ethics, and other subjects is more important than ever.
  3. An environment that encourages learning, innovation, and launching new ideas. There’s an unparalleled sense of energy that comes from dropping a group of smart and ambitious students from multidisciplinary backgrounds into the same physical space. It’s not a coincidence that so many companies are founded on university campuses.
  4. A strong and diverse network. It’s impossible to overstate the importance of a strong network in business, and universities provide a unique channel to connect with fellow students and alumni to build a network.
  5. A credential. Earning a degree from a university means that, with some probability, the credential holder can work hard, is intellectually curious, can work in a team, and has achieved at least a base level of the kind of broad/deep knowledge mentioned above.

At best, current university competitors are offering bits of #1 (focused on practical knowledge, not theory) and a less valuable version of #5. That doesn’t mean that those competitors won’t ultimately be successful in disrupting universities over the long term. Lambda School is currently following Clayton Christensen’s disruption theory formula by serving customers at the low end of the market (students that don’t want to pay up front for tuition, so universities don’t want them) in a way that is “good enough”. But the combination of technical differentiation through software distribution and business model innovation through different payment models won’t be enough for those companies to go upstream into the broader market until they start thinking about the value proposition of universities more broadly and looking for creative ways to deliver on #1-5 above.

As a fan of both universities and Lambda School, I hope this happens because competition will ultimately result in a better product. We should all be cheering for innovation that increases access to education and levels the playing field for students. But the people that are proclaiming that universities are dead have jumped the gun.

On Cryptocurrency, Blockchain, & Cloud Computing

December 7, 2017 by Tyson Trautmann

 With Bitcoin’s price soaring, I’ve found myself spending a lot of cycles explaining why I’m still bullish on cryptocurrency and blockchain. I’ve also found myself in a number of conversations where I’m trying to convince friends that most of the talking heads from the financial world fundamentally don’t understand what blockchain is about or how it will change the game. This post is my attempt to channel those discussions and dispel several popular myths that are currently making the rounds on the Twittersphere. Along the way, I’m also going to try to convince you that the blockchain not only has the potential to revolutionize the financial world but is also poised to have a massive impact on cloud computing. Without further ado, let’s dive into some background on both computing and blockchain.

Blockchain as a Cloud Computer

Every computer can be broken down into two fundamental components: compute and storage. When you use your computer to multiply two numbers together, your computer loads (compute) values from memory or disk (storage) into registers (storage), multiplies (compute) the values together, and stores (compute) the result in another register (storage), from which it can then be written (compute) back to memory or disk (storage). From web surfing to spreadsheet-crunching to gaming, all computer applications boil down to computation that manipulates stored values in interesting ways.

Cloud computing is no different. Amazon Web Services, the leader in hosted public cloud, offers tens (if not hundreds) of unique services that can all be broken down into compute and storage running on Amazon’s massive infrastructure footprint. For example, AWS CodeBuild allows software developers to build and test their code in the cloud and then store the built artifacts in a data store like Amazon’s Simple Storage Service (S3). Like most of its services, Amazon bases the pricing for CodeBuild on the number of minutes that the underlying virtual machine runs (compute) and the pricing for S3 on the amount of data stored per month (storage), because the company understands that most of its value-add can be decomposed into compute and storage. AWS made $4.6B in revenue in Q3 of 2017 alone, which represents massive year-over-year revenue growth of 42%, so the global market for cloud computing products is clearly vibrant and growing quickly.
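
To make the compute-plus-storage billing model concrete, here’s a minimal sketch of how that kind of bill adds up. The rates below are placeholders invented for illustration, not actual AWS prices:

BUILD_RATE_PER_MINUTE = 0.005       # assumed $/minute of build compute
STORAGE_RATE_PER_GB_MONTH = 0.023   # assumed $/GB-month of artifact storage

def monthly_bill(build_minutes, artifact_gb):
    # Total the compute and storage line items for one month of usage.
    compute_cost = build_minutes * BUILD_RATE_PER_MINUTE
    storage_cost = artifact_gb * STORAGE_RATE_PER_GB_MONTH
    return compute_cost + storage_cost

# 2,000 build minutes plus 50 GB of stored artifacts in a month:
print(monthly_bill(2000, 50))  # 10.0 + 1.15 = 11.15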

The simplest way to think about a blockchain is as a big, distributed cloud computer that no single person, company, or government controls. Individuals called miners connect their computers to the blockchain so that they can be used to process transactions (compute) and write the results of those transactions to a digital ledger (storage). The mechanics of executing transactions depend on the blockchain implementation and are typically heavily rooted in cryptography, but in a proof-of-work system like Bitcoin, each mining node takes a block of transactions and hashes them (compute) together with the hash of the previous block and a value called a nonce to create a unique hash value that fits a set of constraints. Once a valid hash is mined, the new block is broadcast (compute) to other nodes, and the transactions in that block are executed (compute) and written to both lightweight and full nodes (storage) across the blockchain network.
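
For the curious, here’s a minimal sketch of that proof-of-work loop in Python. It uses a simplified difficulty rule (the hash must start with a fixed number of zero hex digits) rather than Bitcoin’s real difficulty target, and the transaction format is made up for illustration:

import hashlib
import json

def mine_block(transactions, previous_hash, difficulty=4):
    # Search for a nonce that makes the block's SHA-256 hash start with
    # `difficulty` zero hex digits (a toy stand-in for Bitcoin's real target).
    nonce = 0
    while True:
        payload = json.dumps({
            "transactions": transactions,
            "previous_hash": previous_hash,
            "nonce": nonce,
        }, sort_keys=True).encode()
        block_hash = hashlib.sha256(payload).hexdigest()
        if block_hash.startswith("0" * difficulty):
            return nonce, block_hash  # valid block found; broadcast it to peers
        nonce += 1

# Mine a toy block of two made-up transactions on top of a previous block hash.
nonce, block_hash = mine_block(["alice->bob:1", "bob->carol:2"], "00ab12cd")
print(nonce, block_hash)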

Mining is computationally expensive, so the blockchain is orders of magnitude less efficient than comparable distributed compute and storage solutions, but it’s crucial to note that efficiency is intentionally exchanged for a different property: no one entity owns the system, yet the system can still facilitate computational transactions that involve multiple untrusted parties without introducing a trusted intermediary. If you think about the number of transactions that we participate in on a daily basis where we incur some cost to engage a trusted intermediary, this is a big deal. Facebook can monetize your data in undesired ways, PayPal takes a healthy cut to move your money around, and the government can always compel Amazon to delete or hand over data that it’s storing on your behalf.

In order to incent miners to contribute their compute and storage to the blockchain, the creators of Bitcoin developed the concept of a digital coin, or cryptocurrency, that can be exchanged to run transactions on the blockchain. Anyone who wants to write to the blockchain ledger has to offer a small number of coins for their transaction to be processed. The cryptocurrency cost of executing a transaction on the blockchain is linked to the demand for running compute on the network and inversely linked to the computing power connected to the network. The cost in terms of a fiat currency like the US Dollar is also obviously linked to the going exchange rate between the fiat currency and the cryptocurrency, so the cost in USD to run a transaction on the Bitcoin blockchain has increased steadily of late: in late Q3 of 2017 the cost of writing 200 bytes to the Bitcoin blockchain ledger within 30 minutes was roughly $3-4 USD worth of BTC.
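
The arithmetic behind that figure is simple to sketch. The fee rate and exchange rate below are assumptions chosen to land in the ballpark quoted above, not historical quotes:

SATOSHIS_PER_BTC = 100_000_000

def tx_fee_usd(tx_size_bytes, fee_rate_sat_per_byte, btc_price_usd):
    # Convert a fee rate quoted in satoshis per byte into the USD cost of
    # getting a single transaction of the given size into a block.
    fee_btc = tx_size_bytes * fee_rate_sat_per_byte / SATOSHIS_PER_BTC
    return fee_btc * btc_price_usd

# A 200-byte transaction at an assumed 400 sat/byte with BTC at an assumed $4,500:
print(tx_fee_usd(200, 400, 4500))  # ~3.6 USD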

Because of this need to move currency from one party to another, the original application that was baked into Bitcoin’s blockchain was the exchange of the Bitcoin cryptocurrency between parties. Newer blockchains have built on the Bitcoin foundation and embraced the idea of more generic Smart Contracts that allow arbitrary code to be executed directly on the blockchain. For example, the creators of the Ethereum blockchain have implemented an on-chain runtime environment for Smart Contracts written in a language called Solidity that is Turing complete, which means that in principle it can be used to solve any computational problem. The result is that blockchains like Ethereum look similar to a cloud computing service in that they allow for arbitrary distributed compute and storage, yet they also display the interesting property of being able to facilitate computation that involves multiple untrusted actors without a single trusted third party controlling the service. Efforts are underway to bolt this kind of behavior onto the Bitcoin blockchain via sidechains like Rootstock.
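
To give a feel for what running arbitrary code on a blockchain buys you, here’s a toy escrow written in plain Python. A real contract would be written in Solidity and executed by the Ethereum Virtual Machine; the class and rules below are invented purely for illustration:

class Escrow:
    # A toy "contract": code, rather than a middleman, decides when funds move.
    def __init__(self, buyer, seller, amount):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.funded = False
        self.released = False

    def deposit(self, sender, value):
        # Only the buyer can fund the escrow, and only with the agreed amount.
        if sender == self.buyer and value == self.amount and not self.funded:
            self.funded = True

    def release(self, sender):
        # Once funded, the buyer can trigger the payout to the seller exactly once.
        if sender == self.buyer and self.funded and not self.released:
            self.released = True
            return (self.seller, self.amount)
        return None

escrow = Escrow(buyer="alice", seller="bob", amount=10)
escrow.deposit("alice", 10)
print(escrow.release("alice"))  # ('bob', 10)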

That was a lot to grok, but it’s really impossible to critique the current commentary on blockchain and cryptocurrency without at least a high-level understanding of how the pieces fit together. So with that all in mind, let’s dive into a few recent criticisms from high-profile individuals in the financial sector about both Bitcoin specifically and blockchain and cryptocurrencies in general.

“Bitcoin doesn’t have any intrinsic value…”

One extremely common recent narrative from people in the financial world is that coins like Bitcoin don’t have any intrinsic value. Just a few days ago, Nobel Prize-winning economist Joseph Stiglitz said that “Bitcoin is successful only because of its potential for circumvention, lack of oversight. It doesn’t serve any socially useful function.” JPMorgan Chase CEO Jamie Dimon has claimed that “the only value of Bitcoin is what the other guy will pay for it.”

If you made it through my quick primer above, then you already understand why Stiglitz and Dimon are incorrect. Coins like Bitcoin and Ether can be exchanged for compute and storage on a massive supercomputer with some very compelling properties that allow you to do things like executing transactions between untrusted parties without a trusted intermediary. If you believe that an increasing number of applications will be written and deployed on the blockchain to disintermediate our lives (and if those currencies are architected in a way that limits velocity), then the value of these currencies will inherently go up as the demand for compute and storage on the blockchain increases.

It’s worth pausing here for a moment to mention that there are a few very different kinds of cryptocurrencies floating around today. The first kind is typically called a utility token, which has the intrinsic property of being exchangeable for some kind of service. Bitcoin and Ether are both utility tokens because they have the intrinsic property (coded directly into the token and blockchain) of being exchangeable for compute and storage. The second kind of token is often called a tokenized security because it functions more like a traditional security that just happens to be exchangeable on a blockchain. A tokenized security has no intrinsic value but may have value ascribed to it by extrinsic means. For example, a legal contract may promise a share of the future revenue streams of a corporation pro rata to the holders of a specific kind of token. I suspect that people like Stiglitz and Dimon are completely missing the power of blockchain as a cloud computer, so they’re mistaking utility tokens for tokenized securities that are linked to very little value.

“I’m excited about blockchain, but not Bitcoin…”

A second popular thread is that Bitcoin and similar technologies are interesting, but not in their current implementation. One flavor of this attack is that blockchain technology is compelling but cryptocurrencies are not. Another related flavor is that the existing decentralized blockchain implementations will be replaced by blockchain implementations that are controlled by governments and corporations. World Bank President Jim Yong Kim noted that “blockchain technology is something that everyone is excited about, but we have to remember that Bitcoin is one of the very few instances.” He went on to emphasize that the importance of blockchain is the speed with which it can facilitate transactions, drawing parallels to Alibaba’s infrastructure that can facilitate large transactions in seconds. Former Federal Reserve chair Ben Bernanke espoused a similar view when he talked about how Bitcoin would fail but blockchain was interesting and would help central banks improve their existing payment systems.

Again, this line of thinking is flawed. As noted above, blockchain is an intentionally inefficient system because it trades efficiency for decentralization. It’s hard to see why a bank or government would want to implement an intentionally inefficient technology if decentralization isn’t a desired property. Banks already run on digital systems that can facilitate transactions between people, so what exactly is blockchain bringing to the table? Further, that decentralization can only be maintained if coins are woven directly into the fabric of the blockchain to compensate miners, so the idea of implementing a blockchain without a linked cryptocurrency doesn’t make a lot of sense.

“Bitcoin is an unreliable store of value…”

Another part of the conversation about the value of Bitcoin is centered on whether it will prove to be an effective store of value. A store of value is a mechanism that allows people to facilitate the exchange of wealth and to preserve wealth across both physical space and time. To accomplish this, the medium of storage must exhibit a few properties: it must be liquid, it must be scarce, it must possess longevity, and people must be willing to assign a value to it. Historically, stores of value have included things like precious metals, gemstones, livestock, real estate, and fiat currencies.

Some recent challenges to the validity of cryptocurrencies like Bitcoin as a store of value have focused on whether the currencies will continue to exhibit those required properties. In the article linked above, Jamie Dimon states that “governments are going to crush Bitcoin one day. Governments like to know where the money is, who has it and what you’re doing with it.” In essence, his comments are an attack on the longevity of Bitcoin as well as its liquidity in markets as they become more regulated. Economist and author Raoul Pal claimed that Bitcoin was an unreliable store of value because the group of developers that controls the underlying codebase can change the code: “Even if they don’t change the formula, the fact that they could? That’s enough to say it’s not a long-term store of value.” Pal’s statements cast doubt on Bitcoin’s scarcity (since software engineers could “print more money”) and on whether people should trust the people at the helm enough to assign value to the currency.

The reality is that any government that bans cryptocurrency will miss out on the next great wave of innovation. Where would the US be today if the government had banned all HTTPS traffic because it disrupted the way that intelligence was previously collected? The risk of rogue updates to the codebase is slightly more real, but it’s important to note that there are three groups of actors in each blockchain ecosystem that work as a set of checks and balances against each other: developers, miners, and cryptocurrency holders. The split between Ethereum and Ethereum Classic is a real-life case study in what happens when those groups move in different directions, and it will forever be a warning to the development communities for other blockchains.

Where To From Here?

None of this means that Bitcoin and other cryptocurrencies are destined to continue their meteoric rise. Blockchains have real challenges as they try to scale; when an app like CryptoKitties pushes your network to its limits, you have work to do. Cryptocurrency exchanges are still a major vulnerability of the system, and market manipulation is possible at current volumes. For example, it’s widely speculated that the current price of BTC is being propped up by the fraudulent issuance of Tether and that if USDT and Bitfinex implode, they will bring all cryptocurrencies along for the ride. All of these risks are real.

But as both a Software Engineer and a VC, I can tell you that I see a lot of companies making big bets on blockchain and using it as the Operating System for applications that were previously impossible to build and will change our lives. Those apps aren’t in production or operating at scale yet, so the analogies between the current environment and the dotcom bubble are reasonable: there may be a crash that is followed by a long period where apps are deployed, adoption grows, and the ecosystem justifies the valuation. Or, maybe, the current lofty valuation on cryptocurrencies is correct for a technology that has the potential to disrupt both the financial sector and cloud computing and near-term growth will continue.

When Clay Christensen introduced the concept of “disruptive innovation” in The Innovator’s Dilemma, he explained that incumbents can’t pursue disruptive innovation when it first arises because the opportunities aren’t profitable enough and the development of disruptive innovation would take scarce resources away from sustaining innovation that is required to keep up with the competition. As the disruptive innovation matures, it begins to capture share up-market, and the incumbent can’t react quickly enough.

Jamie Dimon claims that blockchain isn’t worth his attention because JPMorgan Chase moves $6T in money around the world every day while the daily trading volume of all cryptocurrencies is around $10B. Ironically, with a total market cap of roughly $370B, the basket of all of the cryptocurrencies in the world is now more valuable than JPMorgan Chase. Are major industries going to be disrupted in the next decade? Time will tell, but I’m betting on crypto.

How Video Games Made Me A Better Software Engineer (& Dad!)

October 30, 2014 by Tyson Trautmann

About six months ago I left an amazing job at Amazon for a very different, yet equally amazing job at Riot Games. I won’t bore you with the laundry list of factors that went into my decision, but I will confess that one of the many factors was my life-long love of video games. I’m a bit quirky in the way that I play video games because I can’t play a game casually. When I pick up a game, I play purely to master the game and to challenge myself (and possibly my team, depending on the game) to see how good I/we can be. As crazy as it sounds, that constant quest for mastery has taught me a valuable lesson that not only has made me better at my job as an engineering manager, but has helped me to grow in other areas of my life.

Before I hit you with the punch line, let me give you some quick background to help set the stage. As you may or may not know, Riot Games produces a very popular game called League of Legends that pits two teams of five players against each other in a ~20–60 minute battle to destroy the other team’s Nexus before they destroy yours. League of Legends is one of those games that is relatively quick to learn, but takes a lifetime to master because of the complexity of gameplay. Any player can try to work his or her way up the game’s elo rating system, which is broken down into several divisions: bronze, silver, gold, platinum, diamond, master, and challenger.

When I first started trying to climb the elo ladder, I was able to work my way from bronze to silver by just grinding out a bunch of games. As I was playing, I was building my “mechanics” and learning basic concepts that allowed me to improve fairly quickly. As I kept playing, however, my progress stalled out before I was able to hit gold. That’s when it struck me that if I wanted to keep improving, I would have to start doing things deliberately to get better. I wasn’t going to get better by just putting my time in, playing game after game, and making the same mistakes over and over again.

That same concept of mastery applies to almost every area of our lives. When I landed my first job as a software developer, I had so much to learn that I could build my development chops by simply doing my job. At some point that ceased to be true and I had to start doing very intentional things to continue to improve. Sometimes that meant seeking out seasoned veterans for some pair programming, and other times it meant changing teams to work in a new domain or with a new set of tools.

IMHO, the hardest part of improving at something is 1) identifying when we’ve hit our natural plateau and we’re just grinding it out without getting better, 2) deciding that we actually want to invest the immense time and energy needed to get better, and then 3) taking some time where we are very intentionally in the “stretch zone” and practicing for mastery. This season I’ve advanced to platinum in League of Legends, and the only way I was able to accomplish that goal was by setting aside a chunk of play time every week where I wrote down a specific goal (which could be something like “die 3 or less times”, or “kill 85 minions by the 10-minute mark”), focused on achieving that goal while I played, and sometimes watched replays of my games to find mistakes and figure out what goals I should set in the future. As an engineering manager, I put myself in the stretch zone by keeping my personal development plan (PDP) relevant and up-to-date, spending quality time learning from mentors each week, reading books and blogs that are written by other managers that I respect, and continually collecting feedback from the folks that I’m managing on how I can be more effective and using that feedback to drive new goals into my PDP. As a husband and a dad, I get in the stretch zone by sitting down with my wife every Sunday evening and talking through how things are going at home and using those discussions to pick a few things to focus on for the week.

There are a lot of other areas in my life where I’m intentionally not putting in the effort to get in the stretch zone and improve, and I’m fine with that because I only have a finite amount of time and focus. I love playing golf and would like to be a better golfer, but right now I’m just hitting the course occasionally and playing for fun. I suspect very few people have the discipline and the mental focus to context switch and really improve at more than about three things at a time.

I leave you with this challenge: Identify one thing that you want to get better at, and come up with a plan to get into the stretch zone at least a few times a week for the next month. Then leave me a comment below and let me know how your experience went. And the next time someone tells you to quit playing video games and do something productive, tell them that you’re learning valuable lessons that apply to the rest of your life.

The Final Nail In The Windows Coffin

May 3, 2013 by Tyson Trautmann

I generally boot over to Windows for one of 2 reasons: to play games, or to use Office. The rest of my time is happily spent in Ubuntu. I’ve been under the impression that people generally use Windows because it’s more “polished”. My mother is never going to be able to hack away at the command line or understand the dark magics of device drivers, so she needs the neat and tidy packaging that Microsoft offers. Tonight I decided to upgrade from Windows 7 to 8, and it was the worst experience possible. My motivation was that my Windows 7 installation had developed a weird tendency to BSOD (for seemingly random reasons after some debugging) with the dreaded “Page Fault in Nonpaged Area” message, so I figured I would try a clean OS install and thought I would upgrade in the process to see what Windows 8 is all about.

I started by downloading the Microsoft Windows Update utility, as recommended. I went through the steps and was told that I had two purchase options: Windows 8, or Windows 8 Pro. The former was $120, so I spent a while poking around looking for a way to select that I wanted the less expensive “Upgrade” version. I couldn’t figure it out, so I eventually caved and bought the full meal deal. I’m a firm believer in clean installation for Operating Systems based on some anecdotal past experiences, so I downloaded the ISO and burned a DVD. A few minutes later I was booted into the installation utility and was ready to install.

That’s when I hit my first speed bump. When I selected the appropriate disk, Windows told me that it couldn’t create a new partition or locate an existing one, and that I should check the setup log files for more info. I had everything important backed up to Dropbox, so I tried deleting the partition, formatting, and every other option available to me. I rebooted and went through the process again with the same result. Before hunting down where the “setup log files” were, I hit Google on my cell phone, stumbled on this article, and tried the command line partitioning utilities that were suggested. I rebooted again, and still no dice. After a lot of tinkering, I ended up having to unplug my other drives, including the one where Linux was installed, and reboot the computer, and then things magically worked.

I hadn’t ever messed with Windows 8, so I was surprised to be greeted by no Start button and no immediately obvious way to launch applications. I was told that I needed to activate Windows and was asked to re-enter my Product Key, which I had already entered a million times while trying to get the installation working (fortunately by this point I had it practically memorized). When I tried to activate I got an error message telling me that my product key could only be used to upgrade Windows, despite the fact that I had been using Windows 7 just an hour prior, was under the impression that I bought a full non-upgrade version of Windows 8, and didn’t see any clear warnings to this effect during the purchase process. I went back to Google, poked around for a while, and found this suggestion on hacking the registry to make activation work anyway, which seemed to do the trick.

Next I tried to change the wallpaper from the ugly flower, and that failed with no obvious error messages. I was able to click on other images, but all I saw was weird flashing behavior at the edges of the window, and the background didn’t change. Again per Google, it sounds like I may need to wait a while after activation to change my wallpaper, which is just bizarre.

I started downloading apps, and when I hit the Skype site they sent me over to the Windows App Store to download it. Inconveniently, there was no clear visual indication of how to get back to my desktop from the Metro-style UI. I started trying to poke around with Metro and was annoyed at how poorly its visual metaphor seemed to map to the mouse and keyboard, so I searched around for a way to permanently disable Metro for desktop users. Unfortunately that seems to require downloading (or in most cases purchasing) a separate application, which seems absurd.

The icing on the cake was that on my next reboot, I again hit the new and slightly less ugly BSOD with the same error that I was getting before. Both the Windows and Linux memory and disk analysis tools seem to suggest that all is fine on the hardware front, and I have yet to have any issues with Ubuntu which is running on the same machine down to the disk. I guess I’m back to trying to troubleshoot that issue later tonight.

After multiple hours of just trying to get things up and running, I’m trying to picture my mom buying the latest version of Windows because of “ease of use” and having to run disk partitioning utilities from the command line and edit registry keys. Clearly that ain’t happening. I’m also flashing back to how seamless and straightforward installing Ubuntu was last time around. If my experience isn’t atypical, then I think the final nail has been driven into the Windows coffin. That may sound like a sensational claim, but Windows has already lost the battle for mobile to Android (and to a lesser extent these days, iOS), and more and more of computing is moving away from the desktop. At some point, individuals and companies that do use desktops for niche activities aren’t going to be willing to pay $120 for a product that is inferior to something they can get for free, particularly if they’re already having to retrain their habits because the existing UI conventions are broken in every option anyway.

I’m excited that Steam is out for Linux because it feels like that may start a movement for PC games to ship on non-Windows operating systems. Now if I can just get Office working with Wine, I’ll never have to boot over to Windows again…

The New Platform War

October 18, 2012 by Tyson Trautmann

There’s a new battle raging for customer eyeballs, application developers, and ultimately… dollar signs. To set the stage, flash back to the first platform war: the OS. Windows sat entrenched as the unassailable heavyweight, with Linux and Mac OS barely on the scene as fringe contenders. But the ultimate demise of Windows’ platform dominance didn’t come from another OS at all; it came from the move to the browser. Microsoft initially saw the problem and nipped it in the bud by packaging IE with Windows, then tried to delay the inevitable by locking the IE team away in a dark basement and stifling browser innovation by favoring closed solutions for browser development like Silverlight instead of open standards like HTML5. That strategy clearly wouldn’t work forever, and the net result was a big boost in the market share of competing browsers like Firefox and ultimately Chrome. Suddenly people weren’t writing native Windows apps anymore; they were writing applications that ran in the browser and could run on any OS.

The pattern of trumping a dominant platform by building at a higher level has repeated itself many times since. In some sense Google subverted platform power from the browser by becoming the only discovery mechanism for browser apps. When social burst onto the scene Facebook and Twitter became king of the hill by changing the game again. The move to mobile devices has created a bit of a flashback to the days of OS platform dominance, but it’s inevitably a temporary shift. At some point history will repeat itself as devices will continue to become more powerful, standards will prevail, and developers will insist on a way to avoid writing the same app for multiple platforms.

Which brings us to today, as the platforms du jour are again threatened. In this iteration, the challengers to the dominance of Facebook and Twitter are the domain-specific social apps that are built on top of them. When social network users share their status with friends, text + images + location isn’t enough anymore. Different kinds of activities call for customized mechanisms of data entry and ways to share the data that are tailored for the experience. For instance, when I play 18 holes of golf I enter and share my data with Golfshot GPS, which makes data entry a joy by providing me yardages and information about the course and gives my friends the ability to see very granular details on my round when I share. When I drink a beer I share with Untappd, when I eat at a restaurant I share a Yelp review, and if I want to share a panoramic view I use Panorama 360. Even basic functions like sharing photos and location work better with Instagram and Foursquare than with Facebook’s built-in mechanisms.

The social networks will never be able to provide this kind of rich interaction for every experience, and they shouldn’t attempt to. At the same time, they run the risk of the higher-level apps becoming the social network and stealing eyeballs, a position that some apps like Foursquare clearly already have their eyes on. For power users these apps have already made themselves the place to go to enter domain-specific data. That trend will continue to expand into the mainstream as people continue to dream up rich ways to capture real-life experiences through customized apps. To use the OS analogy: there’s no way that Microsoft can dream up everything that people want to build on top of Windows and bake it into the OS, nor would it be a good thing for consumers if they could.

It will be interesting to see how Facebook and Twitter respond to the trend. I suspect that users will continue to move towards domain specific apps for sharing, but that the social networks will remain the place to browse aggregated status for friends across specific domains. Unless, of course, the owners of the highest profile apps somehow manage to get together and develop an open standard for sharing/storing data and create an alternative browse experience across apps to avoid being limited by the whims of Facebook and Twitter and the limitations on their APIs.

The Physical Versus The Digital

July 12, 2012 by Tyson Trautmann

I don’t want to buy things twice. I’m even more hesitant to pay again for intellectual property, which costs little or nothing to clone. I don’t want to buy Angry Birds for my iPhone, Kindle Fire, PC, and Xbox 360. I’m even crankier about buying digital goods when I’ve already bought the IP via physical media. I want the convenience of reading my old college text books on my Kindle without buying them again, and I shouldn’t have to. I hate the dilemma of trying to figure out whether to order my grad school textbooks digitally (because it’s lightweight, convenient, and portable) or not (because the pictures render properly, it’s handier to browse, and looks cooler on the shelf). Maybe I’m in the minority here, but I’m also too lazy to consider buying and setting up a DIY Book Scanner.

Anyone who reads, plays games, or listens to music has shelves or boxes of books, NES cartridges, or CDs that they probably don’t use often and don’t know what to do with. I would love the option to fire up RBI Baseball or reread A Storm of Swords on modern devices with the push of a button, but it’s not worth storing the physical media and/or keeping obsolete devices around.

My frustration has caused me to conclude the relatively obvious: some company needs to offer a way to send back physical media along with a nominal fee in trade for the digital version. The physical media could be resold secondhand or donated to charitable causes, and the folks ditching their physical media could access the things that they have already paid for in a more convenient format. Amazon is the one company that seems poised to make this happen, given that they deal in both physical and digital goods and have efficient content delivery mechanisms in place for both. Is there a financial model that makes swapping physical for digital work for all parties involved, and is it something that will ever happen?
