Frans Ekman's blog



Diving into Customer-Facing Analytics

About a year ago, I announced in a blog post that I was starting a small side hustle to dive deeper into product analytics. Since then, I’ve been sharing insights on X and connecting with fellow entrepreneurs to exchange best practices and ideas. Recently, an interesting area of analytics has captured my attention: customer-facing analytics.

Typically, when we talk about analytics, we’re referring to BI or other types of internal analytics. But analytics can go far beyond that. Sometimes, we want to offer analytics directly to users as a feature—whether through a comprehensive dashboard or smaller analytics elements integrated throughout an app. Even displaying wait times in a food delivery app is an example of customer-facing analytics.

Throughout my career, I’ve built numerous dashboards for both internal and customer-facing use. For internal dashboards, there are plenty of tools available, like Tableau, Qlik, and Power BI, along with countless other options. However, I have yet to find a tool for customer-facing dashboards that fully meets our needs. This gap led me to dive deeper into the field, connect with others who share this need, and learn more.

I recently launched a website for this project, aimed at creating the ideal tool for building customer-facing dashboards: CustomerDashboard.io. Feel free to check it out and share your thoughts! I’ll keep you updated as the project evolves.

Viral Cohort Analysis

Virality occurs when your product spreads from customer to customer. This is often the cheapest way to acquire new customers. Apps usually provide mechanisms to help users invite friends or offer referral programs as incentives. Achieving success requires extensive experimentation and the right metrics for accurate analysis.

Virality is typically measured and modelled with two key metrics: Viral Coefficient and Viral Cycle Time. These metrics are great for theoretical models but difficult to measure and work with in practice. To address this, I use Viral Cohort Analysis. This helps uncover the nature of a product’s virality, allowing me to determine workable values for the Viral Coefficient and Viral Cycle Time.

First, let’s examine the two traditional metrics used to model virality:

Viral Coefficient measures how many new customers each existing customer invites to the platform. A viral coefficient above 1 means exponential growth, as each customer brings in more than one new customer, rapidly expanding the customer base.

Viral Cycle Time is the average time from a new customer’s signup to when their invitees sign up. This process involves several steps: the customer learns to use the app, realizes its value, and then starts inviting others.

Optimizing these two metrics can dramatically accelerate your business growth!

For instance, imagine beginning with 1,000 customers. If your viral coefficient is 2 and the viral cycle time is 2 months, you would reach 63,000 customers in 10 months. However, increasing the viral coefficient to 4 could skyrocket your customer base to 1,365,000 in the same period.

Alternatively, reducing the cycle time to 1 month while keeping the viral coefficient at 2 could increase your numbers to 2,047,000 customers in 10 months. These three growth scenarios are illustrated in Figure 1.

Figure 1: Illustration of how a user base grows based on viral coefficient and viral cycle time.
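These scenarios are easy to reproduce with a short simulation. The sketch below uses the simplest textbook model implied by the two definitions: every customer invites the full viral coefficient’s worth of new customers exactly once, one cycle after signing up (function and parameter names are my own):

```python
def viral_growth(initial, coefficient, cycle_months, horizon_months):
    """Total customers after horizon_months, assuming each customer
    invites `coefficient` new customers exactly once, one cycle after
    signing up (the simple textbook model)."""
    total = initial
    fresh = initial  # the newest cohort, the only one still inviting
    for _ in range(horizon_months // cycle_months):
        fresh *= coefficient
        total += fresh
    return total

print(viral_growth(1000, 2, 2, 10))   # 63000
print(viral_growth(1000, 4, 2, 10))   # 1365000
print(viral_growth(1000, 2, 1, 10))   # 2047000
```

The three calls reproduce the three scenarios above: 63,000, 1,365,000, and 2,047,000 customers respectively.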

This highlights the incredible potential of virality and the importance of short viral cycle times. It’s crucial to encourage users to invite their friends as early as possible.

It’s worth mentioning that, in reality, achieving and maintaining a viral coefficient above 1 is rare and often unsustainable over long periods. For most apps, referral mechanisms are primarily a way to reduce Customer Acquisition Cost (CAC) slightly.

What’s the problem with these two metrics?

When using these two metrics in simulations, a user waits the Viral Cycle Time and then invites a number of users equal to the Viral Coefficient, all at once. After that, they stop inviting anyone. This isn’t how it works in the real world.

Users usually spread out their invitations over an extended period, rather than sending them all at once. An approach to address this is setting specific timeframes for measurement. For example, you can consider only the users invited within the first 90 days after signup. While this approach isn’t perfect, it helps in standardizing measurements and yields more meaningful comparisons, especially during A/B testing.

You might wonder where to set the cutoff: 30 days, 90 days, or more? It’s hard to choose such limits without understanding how virality works in your app. This is where Viral Cohort Analysis comes into play.

Welcome to the world of Viral Cohort Analysis:

I haven’t found the term “Viral Cohort Analysis” or “Referral Cohort Analysis” used anywhere yet, and I wonder if this concept has another name. I’m sure this type of analysis has been done before because it’s very useful.

If you’re not familiar with cohort analysis, check out my earlier post. Viral cohort analysis is simply a cohort analysis of how users invite new users.

You divide customers or users into cohorts based on the week (or month) they signed up. Then, for each cohort, you track how many new customers that cohort invited. See Table 1.


Table 1: Each row represents a cohort of users who signed up during the week in the first column. The second column is the number of users in the cohort. The columns w1-w10 represent the 10 weeks following signup, indicating the number of users the cohort invited during each corresponding week.


Next, normalize the numbers by dividing them by the number of customers in the corresponding cohort, as shown in Table 2. This makes the numbers comparable between cohorts and the values represent weekly Viral Coefficients. This helps you better understand how virality behaves in your app.


Table 2: This table shows the normalized results from Table 1. The number of new users invited is divided by the total users in the cohort. This makes the numbers more meaningful. For example, a user from the Week 1 cohort will invite 0.01 new users during the first week, 0.15 during the second week, and so on.


You can sum these numbers to get a viral coefficient for any period of your choice. For example, take the cohort of users who signed up during Week 1 and sum the viral coefficients for W1-W4 to get 0.50. Doing this for the other cohorts, you get 0.54 for Week 2 cohort, 0.48 for Week 3 cohort, and 0.42 for Week 4 cohort. The average of these would be 0.49.

With this approach, we get a viral coefficient of 0.49 and a viral cycle time of 4 weeks to use in your simulations. This is a lower bound for growth: many of the invites happen well before week 4, and some additional invites will still happen after week 4.
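The arithmetic behind Tables 1–2 and the windowed sum is simple enough to sketch in a few lines. The Week 1 cohort below is hypothetical: 1,000 users, with invite counts invented to match the numbers quoted above (0.01 in w1, 0.15 in w2, 0.50 over w1–w4):

```python
def weekly_viral_coefficients(cohort_size, weekly_invites):
    """Normalize a cohort's weekly invite counts by its size (Table 2)."""
    return [invites / cohort_size for invites in weekly_invites]

def windowed_coefficient(weekly_coeffs, weeks):
    """Sum the first `weeks` weekly coefficients into a single
    viral coefficient for that window."""
    return sum(weekly_coeffs[:weeks])

# Hypothetical Week 1 cohort: 1,000 users, invite counts for w1-w6.
coeffs = weekly_viral_coefficients(1000, [10, 150, 200, 140, 30, 10])
print(coeffs[:2])                                 # [0.01, 0.15]
print(round(windowed_coefficient(coeffs, 4), 2))  # 0.5
```

Repeating the windowed sum for each cohort and averaging gives the single viral coefficient used above.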

I chose the 4-week timeframe because most invites happen within the first 4 weeks for the app in my example. Week 6 would also be a reasonable cutoff point.

For more accuracy, you could model the invites for each week after signup in your simulations. This way, each user invites the corresponding number of new users each week, resulting in a more precise growth projection. However, in my experience, this is rarely necessary, as no simulation can predict the future accurately. The world changes, your app changes, and users change.

One could argue that a cohort will keep inviting new users indefinitely, resulting in a distribution with a long tail. You could create a probability distribution for invites and use that in your simulations. However, I don’t think it’s necessary for the reasons mentioned above. Additionally, the tail won’t significantly impact growth since the cycle time for those invites is very long.
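For those who do want that extra precision, the weekly coefficients can drive the simulation directly, so each cohort keeps inviting according to its age rather than in one burst. A minimal sketch (all names and numbers are my own):

```python
def simulate_with_schedule(initial, weekly_coeffs, weeks):
    """Growth where users who are `age` weeks old invite
    weekly_coeffs[age - 1] new users that week, instead of one burst."""
    signups = [float(initial)]  # signups[w] = users who signed up in week w
    for week in range(1, weeks + 1):
        invited = 0.0
        for age, coeff in enumerate(weekly_coeffs, start=1):
            if week - age >= 0:
                invited += signups[week - age] * coeff
        signups.append(invited)
    return sum(signups)

# With the whole coefficient concentrated in week 1, this reduces to
# the simple one-burst model:
print(simulate_with_schedule(1000, [0.5], 2))  # 1750.0
```

Feeding in the full Table 2 row instead of a single value yields the more precise projection described above.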

Where viral cohort analysis is most useful:

I find Viral Cohort Analysis most useful when A/B testing changes to invite mechanisms. These tests might include prompting users for referrals earlier and more often or experimenting with different incentives.

Yu-kai Chou, in his book Actionable Gamification, emphasizes asking users to invite friends only after they reach the “first major win state” and understand the value of your service. Conversely, others like Andrew Chen recommend asking for referrals frequently, even immediately after onboarding.

This is likely not a black-and-white issue and varies between apps. It’s crucial to A/B test to find out what works, and Viral Cohort Analysis helps understand the impact of changes. You can quickly see if invites happen earlier and whether the total invites decrease. In some cases, this might be a favorable tradeoff, as illustrated in Figure 1.

It’s important to note that spamming users with too many popups and messages to refer friends can have negative effects. Therefore, it’s vital to monitor other metrics, such as churn and user activity.

In summary, I hope you now feel better equipped to measure and work with traditional virality metrics—viral coefficient and viral cycle time—and determine them using Viral Cohort Analysis. Additionally, when you experiment with virality features in your app, you should be able to A/B test and use Viral Cohort Analysis to determine the true impact.

In future posts, I will explore how product analytics can help improve your product and increase revenue. Meanwhile, let me know if you need any help setting up your analytics or if you have other questions related to this topic.

Is your Product a Leaky Bucket? Use Cohort Analysis to X-Ray it

Before investing heavily in marketing and user acquisition, it is vital to know whether your product is good enough to keep its users. Otherwise, you will just be pouring more water into a leaky bucket. Unfortunately, it is not always straightforward to spot this, and your typical key metrics can be deceiving. If you have a leak, you need to know where it is and how to fix it.

Many of us know that total user count is a vanity metric: it keeps growing and won’t tell you when things start to decline, so measuring active users is more meaningful. However, even the active user count doesn’t reveal all underlying problems. Let’s explore this further.

Take a look at the diagram below (Figure 1):

Figure 1: Graph illustrating the growth of total active users of an app over time

This looks somewhat OK. Apart from slow growth, no major problems are apparent yet. You might have heard complaints about high marketing spend without results, or that new users are being acquired at the same rate as before yet growth has suddenly stalled. From this graph, it does not appear that users are churning (leaving). What’s happening?

Analyzing user growth and retention through total active users over time can be deceiving. It’s hard to tell if growth is due to acquiring new users faster than old ones are leaving. This scenario is like filling a leaky bucket – unsustainable in the long run. Without the right granularity in the data, it’s like trying to understand a book by only looking at the cover.

Enter Cohort Analysis:

By splitting users into cohorts (groups) based on the month they signed up, we can better reveal what is happening beneath the surface. Take a look at Table 1.


Table 1: Each row represents a cohort (group) of users who signed up during the corresponding month. The columns labeled 0-6 show the number of users still active (i.e., not churned) after the given number of months. For example, the 0 column indicates how many users initially signed up.

Here, each cohort of users who signed up during a specific month is shown in its own row. For each cohort, the table displays the number of active users remaining after 0-6 months. For example, the first row represents the February cohort, i.e., users who signed up any time during February. This cohort started with 6,551 users, listed under column 0. After 1 month, only 1,987 of these users remained active; after 2 months, 1,319 remained. Finally, after 6 months, only 211 users were still active.

You might wonder why only the February cohort has measurements up to month 6, while the August cohort only has data for month 0. The reason is that this example dataset only contains data up until the end of August, so each cohort only has data from its start until the end of August. Therefore, earlier cohorts have been measured longer than newer ones.

It’s often convenient to display these numbers as percentages of the original cohort still active, instead of absolute numbers. See Table 2. This representation allows easier comparison between cohorts.


Table 2: Percentage of users still active in the cohort after 0-6 months.

By looking at the numbers, it’s clear this example product is a leaky bucket. Most users do not stay past the first month, and only 7% remain after 4 months.
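The normalization behind Table 2 is a single division per cell. A minimal sketch, using the February cohort’s counts quoted above (only months 0–2 are shown, since the remaining months were not quoted):

```python
def retention_percentages(cohort_counts):
    """Convert active-user counts for months 0..n into percentages of
    the original cohort size, as in Table 2."""
    base = cohort_counts[0]
    return [round(100 * count / base, 1) for count in cohort_counts]

# February cohort, months 0-2 (from Table 1):
print(retention_percentages([6551, 1987, 1319]))  # [100.0, 30.3, 20.1]
```

Running each row of Table 1 through this gives the full Table 2.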

It is also possible to illustrate this in the total active users chart by coloring the users by their signup month. We end up with a stacked diagram, as shown in Figure 2, where it becomes clear that the majority of active users at any given time are primarily new users.

Figure 2: Stacked diagram displaying total active users colored by signup month

User retention depends heavily on the app category. There isn’t much optimization that can push retention dramatically beyond what is typical for the category, unless you completely reinvent it. For example, dating apps will have poor retention because people find a partner and leave; in fact, the better a dating app is at matching users, the sooner they leave.

A bad product will have considerably worse retention than other similar apps in the category, indicating plenty of room for improvement. Therefore, it is crucial to measure and conduct cohort analysis correctly to understand how the product is performing, identify where the problems are, and determine how to address them.

You should look for the months or weeks where the highest drops occur. Try to understand why users leave during these times. Track their actions within your product at a more detailed level to understand what is happening. Interview some of these users to uncover the true reasons behind their actions. Data can tell you the “what,” but you often need to talk to your users to learn the “why.”

You can take cohort analysis further by studying how users from different sources behave. Sometimes the problem isn’t your product but the type of users you acquired. For example, you might have acquired a certain type of user with a misleading ad campaign, leading them to churn while your ideal users stay. To find these issues, you need to segment your users by:

  • Acquisition source (which campaign, site, etc. they came from)
  • Device
  • Country
  • Demographics
  • And much more

Then study these segments individually to get a clearer picture. Do cohort analysis for each and compare.
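As a sketch of what comparing segments might look like in code (the user records, field names such as `active_months`, and all the numbers here are hypothetical):

```python
from collections import defaultdict

def retention_by_segment(users, key, months):
    """Share of users in each segment still active after `months` months."""
    totals, retained = defaultdict(int), defaultdict(int)
    for user in users:
        totals[user[key]] += 1
        if user["active_months"] >= months:
            retained[user[key]] += 1
    return {seg: retained[seg] / totals[seg] for seg in totals}

users = [
    {"source": "ad_campaign", "active_months": 0},
    {"source": "ad_campaign", "active_months": 1},
    {"source": "organic", "active_months": 4},
    {"source": "organic", "active_months": 2},
]
print(retention_by_segment(users, "source", 2))
# {'ad_campaign': 0.0, 'organic': 1.0}
```

A gap like this between segments is the signal described above: the product may be fine, but one acquisition channel is bringing in the wrong users.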

It’s worth mentioning that cohort analysis is not limited to just user retention. You can also look at revenue, such as Monthly Revenue per User (MRPU), engagement, specific actions users take in the app, and much more.

Similarly, you can have a leaky revenue bucket even if your users are not churning. For example, users might try out some paid features initially but then decide to only use the free ones.

Now you should be equipped with the basic tools to study user retention of your product with cohort analysis and determine if your product is leaking. Hopefully, this will guide you in making the right key decisions, such as which holes should be plugged or whether you should invest heavily in new user acquisition.

I will explore more ways cohort analysis can be used to improve your product and increase revenue in future posts. Meanwhile, let me know if you need any help doing cohort analysis for your product or if you have other questions related to analytics.

In Search of the Ultimate Product Analytics Playbook

Over the past two decades, I’ve ventured through the entrepreneurial landscape, serving as a CTO and Interim CTO for five distinct companies, and offering advisory roles to a few others. Through it all, one consistent challenge has stood out: product analytics. It has always been a headache, consuming more hours than I’d care to admit.

In my journey, I’ve experienced the dynamics of both small and large teams. When starting out and the teams were small, the responsibility of product analytics invariably landed on my desk. As our operations expanded, we onboarded professionals to ease the load – initially business analysts, and then a mix of seasoned analysts and data engineers. Yet, despite these additions, navigating the analytics terrain remained a challenging, costly endeavor, often falling short of perfect execution.

Beginning a venture with limited funds often brings its unique set of challenges. One of the primary dilemmas is resource allocation. It’s tough to justify hiring analysts when the immediate need revolves around either developing or selling the product. However, understanding the product’s usage is paramount; it feeds into critical decision-making processes. A common misconception is that every bit of data is already tracked, just waiting to be queried. Many assume that any developer with database access can instantly provide answers to questions like, “How many users who signed up in March are still using the product weekly?” But in reality, it’s rarely that straightforward.

I completely grasp the complexities behind analytics, but I’ve always been hesitant about pouring substantial resources into it. Whether it’s shelling out for premium tools or hiring specialists, it’s been a tough call. So, more often than not, I find myself dedicating a ton of my own time to get the insights we need.

Throughout the years, I’ve been on a continuous quest to refine my analytics approach. I’ve tested various tools, delved into numerous books and blog articles, and gained insights from some truly brilliant minds in the field. While I’ve also made my share of missteps, it’s clear to me that things are gradually becoming more manageable. Yet, there’s still this lingering challenge. I can’t shake the feeling that many of the routine aspects of analytics could be streamlined further.

I’m confident that the insights I’ve gathered over the years could spare many startups from the pitfalls I’ve encountered. At the same time, I’m ever-curious and believe there’s a wealth of knowledge out there that can further hone my expertise and take me forward.

We’re navigating through some truly fascinating times in the tech world. The landscape is teeming with groundbreaking tools and advancements. Among these, the potential of AI in revolutionizing the field of analytics stands out. I’m eagerly watching its progression, curious about how it’ll mesh with what we know and perhaps transform our established methods.

I’m eager to dive deeper into this realm. Connecting with fellow entrepreneurs, CTOs, product experts, and business analysts is high on my agenda. There’s so much value in understanding best practices and swapping ideas. As I journey through, you can expect to see my thoughts and learnings shared on Twitter, my blog, and other platforms. It’s a way for me to both seek insightful feedback and contribute back to our vibrant community.

Seasoned developers, especially those indie hackers who’ve crafted multiple apps, often have their go-to templates. These are typically skeleton apps filled with essential functions, integrations to services like Stripe, marketing automation tools, and embody proven workflows. I’m inspired to create something similar, but with a focus on product analytics. Over the years, I’ve developed systems and methods that I’ve recycled across projects. With some refinement and expert feedback, I believe there’s potential to mold these into a valuable resource for others. Naturally, a product analytics template would center less on code and more on best practices and tool utilization.

If you share an interest in this domain and are keen on swapping ideas, I’d love to connect. Please reach out!

Why Good Developers are Always Lucky

When I was younger, I used to play a lot of chess. There is a famous quote from Capablanca, a former world chess champion: “A good player is always lucky.” I have come to realize that this applies to software development as well.

Now let’s look at what this quote really means. If you thought it meant chess is a game of luck, you couldn’t be more wrong. What it truly means is that good players will (sometimes unconsciously) make good moves, placing their pieces on squares where they are more active. For example, placing rooks on open files and pawns on squares of the opposite color to your remaining bishop are both basic strategies. Following them will eventually lead to more opportunities, and perhaps one of those opportunities will lead to victory. When such an opportunity presents itself and some tactic can be used to win the game, it can feel like luck.

In software development, following certain principles and patterns will keep more opportunities open in the future. An unlucky developer will often feel that it is hard to add new features and that they do not really fit into the existing legacy code. A lucky developer will often realize that there is a very easy way to add the new feature, partly because the lucky developer followed good principles when writing the old code. Many years of experience also give a good gut feeling for how to implement certain things.

Less experienced developers often do object-oriented programming wrong. Even experienced developers sometimes break SOLID principles, and that may lead to various difficulties as the codebase grows. Juniors sometimes get basic inheritance completely wrong and then wonder why they have programmed themselves into a corner. Many juniors do not know when to use composition and when to use inheritance, so they inherit classes just to get access to methods they need. Not only does this become a testing hell due to the lack of possibilities to inject dependencies (stubs or mocks), but it will often lead to various problems later.

Let’s look at a completely hypothetical example, one I quickly came up with. Say we are building a racing game and have decided to model how the engine accelerates at different speeds: high acceleration at lower speeds, then slowly decreasing as it approaches the top speed. We have a class Engine, with the method:

public double getAcceleration(double speed);

Now a junior will start implementing the Car class, which has the properties: coordinates and currentSpeed. Additionally, it has the following methods used to control the car:

public void setGasPedalDown(boolean down);

public void runUpdateCycle(int time);

The runUpdateCycle() method is supposed to run one update cycle of the game: first getting the acceleration and applying it to the speed (if the gas pedal is down), then calculating the new coordinates based on the speed and time.

A senior developer will obviously use composition and make the engine a member of Car, most likely programming against an Engine interface rather than the concrete class. Some juniors would perhaps jump at the opportunity to simply turn the Engine class into the car class and add the missing functionality there. Other juniors might (although perhaps not that likely) inherit from the Engine class to get access to the acceleration method. Both of these approaches will of course work at first, but what happens the day the producer of the game asks for a feature to let the user switch engines?

The senior developer would just implement a setter method, unless one already existed for dependency injection and unit testing purposes. The junior developers will either have to do some refactoring or continue on a path that requires a lot more effort and eventually leads to bigger problems. The juniors will experience a lot of bad luck.
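To make the difference concrete, here is a rough sketch of the composition approach, translated to Python for brevity (the class and method names mirror the hypothetical example above; the acceleration formulas are made up):

```python
class Engine:
    def get_acceleration(self, speed):
        # high acceleration at low speed, tapering toward top speed
        return max(0.0, 10.0 - 0.1 * speed)

class TurboEngine(Engine):
    def get_acceleration(self, speed):
        return max(0.0, 15.0 - 0.1 * speed)

class Car:
    """Composition: the car *has* an engine, injected and swappable."""
    def __init__(self, engine):
        self.engine = engine
        self.speed = 0.0
        self.position = 0.0
        self.gas_pedal_down = False

    def set_engine(self, engine):
        # the producer's "switch engine" feature is a one-liner
        self.engine = engine

    def set_gas_pedal_down(self, down):
        self.gas_pedal_down = down

    def run_update_cycle(self, time):
        if self.gas_pedal_down:
            self.speed += self.engine.get_acceleration(self.speed) * time
        self.position += self.speed * time

car = Car(Engine())            # the same seam serves tests: inject a stub engine
car.set_gas_pedal_down(True)
car.run_update_cycle(1)
car.set_engine(TurboEngine())  # swapping engines needs no refactoring
car.run_update_cycle(1)
```

The constructor injection is the “luck”: the engine-switching feature and unit testing with a mock engine both fall out of the same design decision.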

This idea of luck can probably be applied to many other professions as well, or to most things we do in our lives. When we constantly have bad luck, perhaps we should ask ourselves: are we doing something wrong?

Fully committed to new adventures

It’s been quite a while since I wrote a blog post. Life has been hectic and that will not change any time soon, so I might just as well give a quick update on what I have been doing lately.

At the time I wrote my last post, about one and a half years ago, I was still at Kiosked and as a side project looking into a few FinTech ideas. There were two ideas I considered very seriously and almost ended up founding a company for either one of them.

The first one I considered was an investment assistant app. Most people’s life savings are going to waste because of mutual fund fees, and we wanted to help them. For those unaware of this, let me give a quick explanation. I could easily write 20 pages about this topic, but I will summarize the key points:

  • The stock market is at least somewhat efficient (if not fully efficient), so by randomly picking stocks you are likely to do as well as most fund managers on average.
  • Very few funds beat the market by enough to justify their fees, and you cannot know in advance which fund will, so throwing the dice and investing in random stocks is actually a much better alternative (or investing in the index through ETFs, for example).
  • The typical 1-2% fees don’t sound high, but the total loss over a 10-20 year period can be surprisingly large due to the world’s eighth wonder: compound interest. After 20 years, you may easily have 30% less wealth.
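The fee math is easy to check. A minimal sketch, assuming a hypothetical 7% gross annual return:

```python
def fee_drag(gross_return, annual_fee, years):
    """Fraction of final wealth lost to an annual fee, compared to
    compounding at the gross return with no fee."""
    with_fee = (1 + gross_return - annual_fee) ** years
    without_fee = (1 + gross_return) ** years
    return 1 - with_fee / without_fee

# 7% gross return, 2% annual fee, 20 years:
print(round(fee_drag(0.07, 0.02, 20), 2))  # 0.31, i.e. roughly 30% less wealth
```

The exact figure depends on the assumed return, but the order of magnitude holds across reasonable assumptions.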

So we wanted to build a tool that serves as your investment assistant, so anybody can do this themselves quickly for a reasonable fixed fee. The tool was going to help users choose a strategy and pick stocks fitting the criteria, as well as help reduce risk. I won’t go into too much detail in case we one day end up building this product ;)

Although most people we interviewed seemed to get the idea, it seemed hard to sell to the masses through an online channel. Selling financial products is built largely on trust, and a face-to-face meeting is needed to sell to the mainstream market. However, when meeting customers face to face, one would almost have to sell a product with a high fee just to break even.

Regulation in Finland is very strict compared to, for example, Sweden, which means that piloting a product like this would be an extremely expensive operation. Moreover, there were no easily accessible broker APIs in Finland (this is typical in FinTech anyway), so there was a long road ahead to negotiate API access (and possibly even development) with brokers, or alternatively to become a broker ourselves.

Piloting abroad was an option, but not an easy one given our funding situation. The market is very small here in Finland. We looked at other companies who were doing something in this field or had done so previously, and even met with a few of them to learn more. Unfortunately, it turned out that every one of these companies was extremely unprofitable or had already gone bankrupt years earlier. The only exceptions were brokers and companies selling mutual funds and various investment instruments with very high fees (exactly what we did not want to become).

Perhaps the last nail in the coffin for this idea was that my partner decided to drop out. This venture was not something I could pursue alone, and the initial funding was obviously lost when there was no team. I looked around for co-founders or ways this could be piloted with a reasonable investment, but then another idea seemed more lucrative.

Just as the lack of proper broker APIs was a problem for us, it seemed to be a problem for almost every FinTech company. Plenty of FinTech companies out there would like to access bank account data for various purposes, for example personal finance analysis, lender credit score evaluation, etc.

We all knew that PSD-2 (an EU directive) was going to force banks to open their APIs by January 2018 (this date has been pushed back and is September 2019 at the time of writing). Many FinTech companies were doing screen scraping to access this data, and some lucky ones had special deals with the banks. We also knew that the PSD-2 date could be postponed further, that some banks would apply for extensions, and that even after that there would be banks choosing to accept the fines just to avoid opening up their APIs. Finally, even if the new directive helped a lot, there would be fragmentation: many banks implementing their APIs in their own way. No small FinTech startup wants to integrate with 5,000 different banks and different APIs in the EU.

I guess most of you can guess what we wanted to build: a so-called “API hub,” one standardized endpoint to all banks. We wanted to launch this in pre-PSD-2 times, solve the bank integrations with screen scraping, and try to capture a big enough market share before PSD-2 arrived.

As always with promising ideas, I did customer development: I contacted a lot of companies I thought could be potential customers and tried to learn as much as possible. Many seemed interested in paying a decent price for this, and there really seemed to be a market.

The biggest drawbacks were that there was competition; some companies had a head start and had raised quite big rounds. Competition validates the market; however, in this type of business it is a bit tricky to differentiate from the competition. There were some options, though I suspected competition would push prices down, and that FinTech companies would start implementing some integrations with common banks themselves to save on costs.

It was a tough decision to make. I wanted to do this so badly but I did not have a co-founder for this yet. I had two advisors and some developers who would be potentially interested in helping me out at a reasonable price.

As always, one adventure leads to another. I ended up dumping this idea for an even better opportunity. One of the companies I had met for customer development was Arkkeo. The founder, Tuomas Kohila, said he had been looking for a technical co-founder for his company. Arkkeo had been building a document bank for receipts, tickets, and various documents one receives from companies. He had a new big vision and plenty of interesting ideas for how we could take it forward. After some meetings, emails, and phone calls, I ended up joining him instead.

So far it has been really exciting. We pivoted, launched a small pilot app in NYC to validate some key assumptions, and have now landed back in the FinTech field. I will write more about this soon. In short, we are on a mission to create the standard for how shops and restaurants distribute appreciation and gratitude!

Also from a tech point of view this has been super fun. We modernized the tech stack and are using React Native for the app and React for the merchant dashboard. We also built validation as a key cornerstone in our agile development process, so that everything we build will result in learning and validation of hypotheses. I will write another post explaining this process, which I think is the best way for a startup to really make sure all development produces “validated learning”. We are going as “Lean Startup” as one possibly can in the field of FinTech.

4 Reasons Customer Development in Wealth Management is Different and How to Deal With it

I noticed that very little has been written about how to do customer development in the world of wealth management and FinTech. When I jumped into my adventure, I searched for advice and couldn’t really find any. Now that I have gained some experience, I decided to write a post and share some of my observations and ideas.

In my previous blog post, I explained that I had been doing customer development for some new ideas related to wealth management. I had earlier experience doing customer development in less regulated markets, gained both as an actual founder (Disruptive Media) and as a member of a founding team (Kiosked). Thus, I only knew how to do this in less regulated markets, like photo sharing, e-commerce and online advertising. I thought the same process could be applied directly to wealth management as well. However, wealth management is a bit different.

First, let’s recap what these methodologies are. Customer Development, invented by Steve Blank, helps startup founders systematically search for a profitable and scalable business model and ensure there is product-market fit before putting all their chips on the table and scaling up the company. The key idea is getting out of the building and validating whether customers really have the problem and whether they are willing to pay enough for the solution. Lean Startup is a similar approach, invented by Eric Ries, which is perhaps better suited to typical consumer web apps, where most experiments can be run online.

What both of these methodologies have in common is that they aim to minimize waste. A startup often needs to try things out quickly and learn from each experiment. When something does not work, most of the code (and work) goes in the trash. It makes sense to minimize waste and only do what is absolutely necessary to validate or invalidate a hypothesis (or idea). The more efficiently a startup executes this process of coming up with hypotheses and testing them, the more likely it is to find product-market fit before it goes bankrupt.

The word hypothesis is used to distinguish ideas and assumptions from facts; they become facts once validated. Typical business hypotheses include, for example, what price customers are willing to pay, which channels to use for distribution, and what the cost structure looks like. These are often easiest to document lightly in a Lean Canvas or similar.

Whichever process you use, in the end it all comes down to two steps: 1) coming up with business hypotheses and 2) validating them. Usually hypotheses are validated by conducting experiments. Sometimes this can be done by going out, meeting the customers and test-selling the product. Other times it is necessary to build an MVP and measure conversion, retention and whatever else is necessary for success.
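As a toy illustration (all numbers, names and thresholds here are hypothetical, not from any real experiment of mine), the “measure conversion” part of an MVP experiment can be as simple as comparing an observed rate against a success threshold that was agreed on before the test was run:

```python
# Toy sketch of evaluating an MVP experiment. Hypothetical numbers only.

def conversion_rate(conversions: int, visitors: int) -> float:
    """Fraction of visitors who converted (e.g. signed up or paid)."""
    if visitors == 0:
        raise ValueError("no visitors, no experiment")
    return conversions / visitors

def evaluate_hypothesis(observed_rate: float, success_threshold: float) -> str:
    """Decide the fate of a hypothesis against a pre-agreed threshold."""
    return "validated" if observed_rate >= success_threshold else "invalidated"

# Hypothetical experiment: 1,000 landing-page visitors, 18 sign-ups,
# and a success threshold of 2% decided *before* running the test.
rate = conversion_rate(18, 1000)
print(rate)                             # 0.018
print(evaluate_hypothesis(rate, 0.02))  # invalidated
```

The important part is not the arithmetic but the discipline: the threshold is fixed up front, so the result forces a clear validated/invalidated decision instead of post-hoc rationalization.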

Success in the search for a profitable and scalable business model is determined by how well a startup performs both of these steps. The better a startup is at coming up with the right hypotheses, the fewer iterations it will need. In the best case, the first set of hypotheses happens to be right and the product immediately hits a home run. On the other hand, the better a startup is at the second step (testing hypotheses), the more iterations it can do before reaching the end of its runway.

Most of the startup literature focuses on the second step, which is mostly about process and execution. It is the easier step to get good at: learn to design experiments that produce maximum learning with minimum effort. Good engineers develop things faster, and the whole engineering team can improve its process. There are tons of books about these things, so there is no point discussing it any further in this post.

It is much harder to create any formal method for coming up with good hypotheses (in other words, great ideas). It is a combination of experience, knowledge and creativity. It helps to have a long background in the industry or to be a heavy user of similar apps. A heavy user of some specific app may have a painful unsolved problem and a vision for a solution.

I used to think that ideas don’t matter and that it’s all about execution. This is the typical mantra repeated everywhere. Books like The Lean Startup give the impression that if you do not know something, you just test it and find out. This is all very good advice, and it also applies to wealth management and FinTech. However, in certain businesses testing and validating hypotheses is harder and more expensive. Wealth management and many FinTech businesses unfortunately belong to this category.

Why is this the case for wealth management?

1. Trust is everything

One reason is that this is a business where the customer’s trust is everything. Your sales conversions are completely dependent on how much people trust you and your brand. Hence, putting up a landing page to test conversions will not necessarily be a reliable experiment for validating whether a wealth management solution will attract enough interest.

Since it’s all about trust, a startup’s success will depend a lot on how well it manages to establish trust with its core audience. Building such trust will most likely take time, and it is very hard to test in advance. The same goes for conversion rates and willingness to pay: both depend on people’s trust.

Of course, trust in a brand is important in any business and has a huge impact on conversion rates everywhere. Still, I think wealth management is at the high end of the scale. When doing customer development in other markets, it is much easier to find earlyvangelists and visionary customers who are eager to be the first to try new things. In fact, they often prefer that others have not yet discovered the new product, and therefore do not want to see references either. In wealth management, however, even earlyvangelists and visionary customers want to be sure it’s a trustworthy service, and say they would like to hear that a friend has used it, or something similar.

2. People don’t want to share financial information with strangers

Another thing to keep in mind is that people are not really open to sharing all their financials with strangers. Additionally, people are a bit embarrassed to admit how unprofessionally they handle their own investments, and often try to give a better picture of what they are actually doing. This adds to the uncertainty.

3. Regulation

A third difficulty is regulation. In many countries FinTech startups can run small, closed and controlled experiments without all the necessary licenses. This is unfortunately not the case in Finland. Depending on what you want to do, licenses can take a very long time to obtain. There may also be requirements on the company’s staff and its financials. This is extremely bad news if you just want to test something quickly with an MVP.

4. Things happen slowly

Last but not least, things happen very slowly in this field. Banks and other financial institutions move very slowly, and it can take ages to get any partnership done. The situation is even worse if you need integrations with them.

SOLUTION

Since the two-step iterative process of coming up with hypotheses and validating them can be extremely slow and expensive, we either have to succeed with fewer iterations or accept greater risk. Either you run the iterative process and make decisions based on very inaccurate data, or you accept the bigger risk and hope you are right. I have a strong feeling that in this business one simply needs to take a bigger leap of faith and have a lot more funding at an earlier stage.

What does this mean?

I think it means that startups need to put far more value on the idea phase (coming up with good hypotheses). It is important to get the initial “guess” close enough, so that potential pivots will not require new licenses or new deals and integrations with slow-moving banks and other big players.

How to ensure that the initial “guess” is close enough?

I think that especially in wealth management and FinTech, it is vital to have people on the team, and advisors, with a background in the industry: people who can help make sure no detail has been overlooked that could kill the business later. Additionally, it is a good idea to leave room for error in the initial hypotheses, because there probably is some. In other words, the business must look extremely profitable on the spreadsheet, so that reality will still be acceptable.

I still want to point out that much of what I just wrote depends largely on what kind of startup we are talking about. There may be concepts in wealth management that can easily be pivoted in any direction without that much hassle. Stock pickers, alternative data services, and various calculators and expert tools are not regulated and fall into this category.

Founders should be aware of the slow-moving nature of the field and understand that pivots here and there are not that easy. When I got in, I underestimated the impact of regulation and how slowly things move forward in this field.

CONCLUSION

In conclusion, this business moves slowly, and trying anything out is expensive and time consuming. Therefore, startups cannot easily run experiments and pivot here and there. Instead, a startup needs an experienced team, preferably with an industry background. This improves its odds of being close enough with its initial hypotheses, so that no major pivots are needed.

This is my current understanding of this business. I’d be happy to hear what you think!

Scratching the surface of FinTech

It’s time for me to give an update on what I have been up to lately. I was lucky to have the opportunity to take a break from my daily work and look at a few market opportunities in FinTech.

Already during 2014 and 2015 I tried out some new concepts related to trading stocks, mainly TradingDrill.com and some ideas related to sentiment and alternative data. We launched a low-fidelity MVP to test initial demand and see how the channel for reaching customers worked. After that we continued doing customer development and interviewed everyone who signed up and could be reached. Unfortunately, we learned that the business model was not profitable and the customer acquisition costs were quite high.

We turned over a few stones and investigated some related ideas that came up during the customer development process. Ultimately the interest faded away. Regular work had the highest priority and demanded almost 24/7 attention, so the ideas were put on hold sometime in late 2015.

At a startup event a few months ago, I happened to meet Petri Asunmaa. He was pitching his idea for a tool to help stock investors. We started talking after the event and eventually decided to join forces and figure out what could be done. This time it was about investing, not trading, and there was huge potential in the field.

We started doing customer development almost by the book. So far we have interviewed about 100 potential customers, banks, brokers and various players in the industry. Petri’s blog post summarizes our findings quite well. We also pivoted our concept a few times, starting from a tool for non-professional “do it yourself” investors and ending up with a wealth management solution. We gained a significant understanding of the business and of how customers really behave, and the reality is quite surprising.

Additionally, we learned a lot about the difficulties FinTech startups face when entering the market. Typical Lean Startup methodologies are difficult to execute and need some adjustments due to the nature of the field. Partnerships and integrations with existing big players take a very long time, not to mention the regulation and applying for all the necessary licenses. You cannot just decide one morning that you want to test whether customers are interested in buying stocks through you, deploy your MVP code in the afternoon, and then measure and learn. No, you need a different approach in this business.

I will write a separate blog post about lessons learned doing customer development in wealth management and FinTech. The biggest takeaway is probably that one needs to take a bigger leap of faith than in other software businesses. This usually comes in the form of bigger initial investments as well as larger and slower MVPs.

During the process, many other good ideas came up. Some related to trading or wealth management, others to disruptions in the industry, like PSD2. We are now working on a better plan for how to move forward with a few smaller leaps of faith instead of one gigantic one.

We are interested in continuing the discussion with other companies within wealth management, both new startups and existing players, to see if we could find ways to validate our ideas more easily and do something together.

PHOTO EDITED FROM WIKIMEDIA COMMONS USER MORITZ WICKENDORF

The Disruptive Adventure – A Happy Ending After All

A long time ago I wrote a series of posts about my own startup. For the readers with more time and patience, I suggest you pour a glass of scotch (or whatever your favorite drink is) and start reading from part 1/3. I certainly had my fair share of scotch when I wrote it.

For the rest of you, I’ll just give a quick summary:

My co-founder Jarl Törnroos and I built a photosharing website that allowed users to gather their photos around shared events. All the events together formed the collaboratively documented story of each user’s life.

This did not become a major success, for various reasons explained in more detail in the full posts. At the time, we thought monetization was our biggest problem to solve. We could not have been more wrong: with the small user base we had, monetization should have been the least of our worries. Anyway, we wanted to monetize our small user base by selling print products (like photobooks) made from the photos people had uploaded to the service, and that is what we did.

There were no really good third-party solutions we could use for this purpose, so we set out to build our own. That eventually led to a full-blown e-commerce solution specialized in selling print products. Here we kind of pivoted, although in practice it was more like branching off, or spinning off, a new business.

We productized the e-commerce solution and sold it (as SaaS) to a few photo shops that needed an online store. Unfortunately this was not a really good business either, and it was hard to compete with existing big players like Fuji, who already had well over 50% market share and sold a full service, including an e-commerce solution, photo machines, paper, etc.

We had to try something, and the next thing on the list was the hypothesis that other websites and photosharing sites would also want to monetize their images by letting users buy them, just like we had done with ours. We launched together with Riemurasia, one of Finland’s biggest websites at the time. This actually generated real revenue at first. Unfortunately we ran into various difficulties, which we were unable to overcome with the money we had left.

That forced us into consultancy, which is really nothing more than a safe way to fail. We built whatever software anyone was willing to pay us for and took projects from various clients. One of the projects was for the guys behind Kiosked. Eventually, we put all our other businesses aside and joined them as partners when they founded the company Kiosked.

This is where the story ended in my previous posts, and a lot has happened since.

The e-commerce solution had been running by itself for about 4-5 years. It required almost no maintenance and had very few incidents, as long as we remembered to upgrade the servers before the huge peak in usage before Christmas. Perhaps the biggest headaches came from the bureaucracy: sending invoices to customers, filing VAT reports, etc.

Last year (2015), we sold the whole e-commerce solution to MV-Kuvat, a photo shop that had been our customer since the very beginning. Their engineers are taking over all future development work and will be able to customize it better for their own needs.

Also worth mentioning is that today we closed the Pix’n’Pals photosharing community, which was founded 9 years ago. Although that is very sad, it frees up room in our minds to focus on other things. Sometimes it’s good to cut your losses and move on.

The exit for our e-commerce solution was far from anything you usually read about in TechCrunch. No Ferraris or private jets for us. Still, it was an exit, which provides some kind of closure and recognition that what we had built really was something of value. That feels good, and that is worth celebrating!

Refurbished my Blog - No More Wordpress

It’s been way over a year since I wrote anything here. Not a single post in 2015 and only one in 2014.

Life has been quite hectic lately. Hard work at Kiosked and four kids at home leave very little room for additional hobbies. Still, I have had the time to write two posts on Kiosked’s blog about microservices:

Discussing pros and cons of microservices:
http://blog.kiosked.com/en/blog/to-microservice-or-not-that-is-the-question/

Kiosked’s approach to microservices:
http://blog.kiosked.com/en/blog/kioskeds-approach-to-microservices/

Now it’s time to get back on track and continue writing. But first things first: it was time to get rid of the damn WordPress. WordPress has been a liability. It is under attack all the time, mostly various DDoS attacks on weak points like xmlrpc.php. But there’s more: the technical design of WordPress is really flawed. All PHP files are publicly reachable, including all the plugins, which may have security holes of their own. It’s a nightmare. I don’t have time to deal with that kind of shit.
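For anyone who has to keep running WordPress for now, a common stopgap is to block the weak endpoint at the web server, before any PHP runs. A minimal sketch, assuming an nginx front end (the snippet is illustrative, not taken from my actual setup):

```nginx
# Deny all requests to xmlrpc.php, a frequent WordPress attack target.
# The "=" modifier makes this an exact-match location, so it takes
# priority over the generic PHP handler for this one URI.
location = /xmlrpc.php {
    deny all;
}
```

This doesn’t fix the underlying design problem of every PHP file being reachable, which is why switching to a static site was still the right call for me.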

Now I have switched to Hexo. Thanks to Perry and Daniel for the recommendation. Although I am not too keen on writing in Markdown, I will sleep a lot better at night just serving plain HTML files instead of a buggy WordPress install.

Ok, this was a short post. I have learned that the best way to get started is to just do something quickly, even if it is small. So more posts will follow…