
I want to start the year with some higher-level reflection on the venture industry and what I think will happen this year.

1. Fastest to $1B ever

In 2018, Bird shot like a rocket to a billion-dollar valuation.

Without calling out specific startups, we’ve seen this capital-flush environment prop up valuations faster than ever before. Fred Wilson predicts that this era is over, but if we get yet another year of abundant capital, we may very well see a startup reach unicorn status in under a year.

2. Doubled growth in AI/ML startups

Year after year, acquisitions, funding, and the number of companies founded have all blown up for a field once thought impossible to commercialize. Now, according to IDC, global spending on AI/ML software will exceed $50B by the end of this year. While much of this will come from the big four, large markets grant large opportunities for founders. I believe these categories of startups will collectively receive $20B+ to refine bleeding-edge research into disruptive tech across all sectors.

3. Interoperability will blossom

Interoperability is the ability of applications and services from different companies to interface with each other (think of how you can post an Instagram picture to Twitter by flipping a switch). Given recent years of hardcore refinement in APIs, web standards, and data standards, we need our data and our services to be easily accessible from anywhere for maximum value. Even Apple lets you see your iCloud photos from a web browser. Given how mature common applications in personal and business life have become, I believe startups will continue to pitch investors on the selling point that their service enhances other services to the nth degree. The real contingency here is whether a strong business model can be built around this principle, since subscriptions and ads don’t fit so neatly.

I’ll check back in on these in December!

I often think about how beautiful the Unix-based operating system is with its processes, file structure, commands, packages, and open-source nature. In fact, it costs basically nothing once you have the hardware to use Linux, for example.

This got me wondering about the ever-increasing expense ecosystem for startups. You can argue that it is driven by the increasing availability of startup-oriented tools built by other startups, but the bottom line is brand new companies are spending a lot of money on a suite of services to make them run.

I’m not going to argue that these expenses aren’t useful, often they may be the factor between stagnation and hypergrowth. But in the spirit of the lean startup, I think we have a lot to learn from Unix.

1) More open-source, more free to use

It’s wonderful that Linux is free to use, especially if you’re an individual. Businesses are made up of individuals, so why not follow the same motto? I imagine more and more tools used by startups will become free, or a free competitor will come along. Some examples are Typeform, Zoom, SendGrid, etc. They have free tiers, but a business will most likely need to pay for real usage.

2) Hierarchy, but with permissions and accessibility

Some things should not be tampered with, and that’s why the root user exists on *nix systems. Still, the user can, with permission, access anything and visualize everything going on in the OS. A startup should emulate this by being transparent about all of its activities, even if not everyone can change the decisions behind those processes.
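The *nix model above can be sketched in a few lines. This is a minimal, hypothetical example (the filename and contents are made up): a file everyone can read, but only the owner can change.

```python
import os
import stat

# A shared "decisions" file: transparent to all, editable only by its owner.
path = "decisions.log"
with open(path, "w") as f:
    f.write("Q1 roadmap: approved\n")

# Mode 644: read/write for the owner, read-only for group and others.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # everyone can inspect it; only the owner may edit
```

The same idea scales up: visibility is the default, and write access is the exception granted deliberately.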

The struggle to become a venture capitalist is something personal to me, and something that I’ve observed across the college-aged population in the United States.

It starts with gunning for a career as a software engineer, product manager, startup founder, etc… and leads to an intense desire to fight for a position to fund the next idea.

The venture capitalist has, in our eyes, become the modern-day rendition of a hedge fund manager, Hollywood actor, or leading politician.

The reason why is fairly simple, I believe. The past two decades of skyrocketing startups have driven the venture capitalist from a shadow position in the investment world to a digitally-enhanced, large-platform, clout-gaining influencer.

Unfortunately for us, the number of VCs needed to make the silicon engine turn is pathetically small compared to all the other startup and tech jobs, and this scarcity makes the incentive stronger than ever. Our networks become exponentially more valuable every year, which is analogous to finance in general.

The reason I wanted to write about this is that I think there is another reason, though more passive, that can drive an intense desire to become a VC.

The easiest way to explain this is to look at the concept of the technological singularity – where our technology will start building itself faster than we can keep up with.

The startup sprint has the potential to create a highly inequitable world, whereby rich founders from previous generations, now savvy investors, stay relevant by owning top-level firms that encircle all innovation through talent attraction and venture funding.

We want to own the future because otherwise we will be left out.

This has been something that has taken me some time to understand in my head, so I leave it to the reader to seriously ponder this concept of equity ownership in a future high-tech world.

Thoughts on machine learning, feature engineering, and data pipelines for informed prediction, analytics, and insight from a summer internship.

I had a blast working for Topo Solutions this summer.

My role was in data science, and I had the flexibility of using this time to learn, experiment, and apply modern machine learning on client data.

I ended up with some fantastic results that even surprised me. The models will go on to help the company push forward in applications of AI and predictive modeling. All this aside, taking the product-oriented approach to machine learning made me think a lot about business value…

Here are my two biggest takeaways on what is changing in the “data-driven” business:

1) Level 1 chaos is a growth strategy.

First-order chaos doesn’t respond to prediction. If you predict the weather to some level of accuracy, that prediction will hold because the weather doesn’t adjust based on the prediction itself. Second-order chaos is far less predictable because it *does* respond to prediction. Examples include stocks and politics.
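The distinction can be made concrete with a toy simulation. This is a hedged sketch, not a real model: the system drifts randomly each step, and in the second-order case the actors deliberately move away from any published prediction that would have been correct.

```python
import random

random.seed(0)

def simulate(feedback: bool, steps: int = 1000) -> float:
    """Return how often a naive 'same as yesterday' prediction comes true.

    feedback=False: first-order chaos -- the system ignores the prediction.
    feedback=True:  second-order chaos -- the system reacts against it.
    """
    state, hits = 0, 0
    for _ in range(steps):
        prediction = state                        # predict no change
        nxt = state + random.choice([-1, 0, 1])   # natural drift
        if feedback and nxt == prediction:
            nxt += random.choice([-1, 1])         # actors dodge the forecast
        hits += (nxt == prediction)
        state = nxt
    return hits / steps

print(simulate(feedback=False))  # weather-like: predictions often hold
print(simulate(feedback=True))   # market-like: predictions undermine themselves
```

In the first-order run the naive prediction succeeds at the system’s natural rate; in the second-order run it never does, because the prediction itself changes the outcome.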

Growth strategies often fall into buckets like network effects, stickiness, pricing power, ecosystem lock-in, etc.

And almost universally today, clients generate data that can be looped back into the company’s product — feedback, A/B testing, feature requests, etc.

If a predictive model can add value to the business through either the product or sales, then a level 1 chaos system is desirable. It proves iterative growth is accessible, and this, in turn, means greater future cash flow.


Say a bakery generates client profiles and collects buying habits for everyone coming in to purchase baked goods (a sort of Silicon Valley mutant of a bakery). We understand that the chaos of this system is first-order — reading this data and predicting what a customer will buy next time will not influence future sales (and the primary market is wholly owned by the business, so information liquidity is low).

The bakery can iteratively use this data to improve (in a provable way). Some ideas might be using A/B testing to better the taste of the pastries that are most commonly sold, or lower the price of less desirable (but still worth producing) items.

The business product can improve and generate more cash.
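The bakery’s feedback loop can be sketched in a few lines. Everything here is hypothetical (the customers, items, and the naive prediction rule are invented for illustration): purchase logs feed both a next-purchase prediction and a pricing decision.

```python
from collections import Counter

# Hypothetical purchase logs keyed by customer.
purchases = {
    "alice": ["croissant", "croissant", "scone"],
    "bob":   ["baguette", "croissant", "baguette"],
}

def predict_next(customer: str) -> str:
    """Naive prediction: the customer's most frequent past purchase."""
    return Counter(purchases[customer]).most_common(1)[0][0]

def discount_candidate() -> str:
    """The least-sold item is a candidate for a price cut."""
    totals = Counter(item for history in purchases.values() for item in history)
    return min(totals, key=totals.get)

print(predict_next("alice"))   # croissant
print(discount_candidate())    # scone
```

Because the system is first-order, acting on these outputs (stocking more croissants, discounting scones) doesn’t invalidate the data that produced them — the loop can keep iterating.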

2) Friction starts in the pipeline.

You could spend days reading about “zero friction” in business, but the underlying point is that we lose business value when both we and our customers waste time.

As an existing business becomes more data-driven, the collected data and the data-collection policy likely will not scale perfectly (e.g. needing more columns, a better storage format, etc.). If this is left unaddressed, the post-pipeline stages are hindered.

Complex data sources have necessitated data-collection standards, which have reduced friction at the software-implementation level. Still, inaccurate tagging, foreign imports, etc. will slow down even a good system.

When it becomes slow, expensive, or time-consuming to work with the data, data-driven decision making becomes less likely as people become impatient, and thus irrational.

For example, imagine we want to produce real-time analytics for customers using a platform we built to track study habits. If we allow 100% free-rein customization of the platform, we cannot quickly ingest the aggregate data for analytics, as we have no clue how each customer is collecting their data. We’ll have to rely on clustered reporting and hand-tuned pipelines, slowing down the entire data operation of the business on our end. We can improve this by still allowing maximum customization, but offering some sort of predetermined tagging as an opt-in to facilitate business goals with the data.
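The opt-in tagging idea can be sketched as a tiny ingestion step. The tag vocabulary and record fields below are assumptions invented for the study-habits example: fields matching the standard vocabulary are cheap to aggregate across all customers, while everything else stays free-form.

```python
# Hypothetical standard vocabulary customers can opt into.
STANDARD_TAGS = {"subject", "duration_minutes", "location"}

def ingest(record: dict) -> dict:
    """Split a record into aggregable fields (standard tags) and
    free-form fields that need per-customer handling."""
    standard = {k: v for k, v in record.items() if k in STANDARD_TAGS}
    custom = {k: v for k, v in record.items() if k not in STANDARD_TAGS}
    return {"standard": standard, "custom": custom}

row = ingest({"subject": "calculus", "duration_minutes": 45, "mood": "focused"})
print(row["standard"])  # cheap to aggregate across every customer
print(row["custom"])    # preserved for the customer's own custom views
```

The customization is untouched, but any record that opts into the standard tags flows through the aggregate analytics pipeline with zero hand-tuning.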

This can all seem obvious, but knowing where friction tends to appear, and being able to diagnose it, makes business growth more realistic to foresee and depend on.

Thank you for reading, hope you found some value!