Crashing into growth marketing: a CEO’s journey. Part 2: growth experiments.

Johanna-Mai Riismaa
10 min read · Nov 28, 2020

This is the second post of many documenting my journey as an early-stage startup CEO through the growth marketing minidegree by CXL Institute — a 12-week online program about the practicalities of growth marketing.

Disclaimer: I’ve done intense learning sprints before. I’ve never touched marketing from a practical perspective before this course. Consult a specialist before trying this at home.

Promo insert: My startup is called Zelos. It’s an app that helps you manage tasks and distribute activities across a very large group or community, and you can sign up for free here: getzelos.com

Homework: keywords.

I spent the whole weekend looking at keywords. There are a million different tools you can use to either generate or validate keywords. The good ones cost quite a bit of money, but I was able to get a basic experience without paying anything extra.

I went to Google Keyword Planner first and tried to come up with good words that people might type into the search engine to find our product. Important lesson learned instantly — nobody else thinks about your product like you do. Nobody is actively looking for your product with the keywords you’ve made up yourself.

Long story short — I got really bad results with my favourite words. Nobody is looking for “internal talent management software”; it’s a thing only in those three Deloitte / Gartner articles that speak HR geek. Nobody looks for “gig management” or “task dispatch” either. These phrases describe our product well, but they don’t describe the problems people are actually having.

With a little practice and looking at the suggestions from the keyword planner, I got better at this global version of Family Feud, and eventually managed to guess quite a number of more popular searches. I created a spreadsheet with these phrases and then looked for more data about them.

Google Keyword Planner gives pretty vague numbers for average monthly searches. Many of my good picks landed in the 100–1K bucket, and it makes a huge difference whether a phrase gets 100 or 999 searches per month. I got more specific numbers from the Ahrefs keyword generator, which showed that most of my keywords were disappointingly close to 100 searches and really far from one thousand. Ahrefs also displayed keyword difficulty, and of course the higher-volume keywords came with higher difficulty ratings.

I also tried a tool called Serpstat that let me do a few free searches on their engine — mostly their results were pretty similar to the data I got from Ahrefs, with some additional numbers like a competition rating and an estimated cost per click (which was radically different from the CPC I got from Google!). I still need to figure out why some keywords give me a negative CPC on Google (What does this mean? Will they pay me for the clicks?). Maybe it will all become clear once I get to the keyword advertising part and actually pay for the clicks.

Altogether I did three rounds of generating, validating, and comparing keywords, each time coming up with an improved strategy for what might work. I had to revisit the process quite a few times because the results depend on location and language, and I often forgot to set these in the tools, which left me with incomparable results from different regions.

Finally, I now have a spreadsheet of 50+ keywords with comparable volume/difficulty data for the same region. Some with relatively high volume AND low difficulty! There it is, I’ve figured out my very first set of keywords, definitely a top achievement for this week!
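To make the comparison concrete, here’s roughly what the spreadsheet logic boils down to. A minimal sketch; the phrases, the numbers, and the naive volume-over-difficulty score are all invented for illustration, not real data from my sheet:

```typescript
// Rank phrases so that high volume and low difficulty float to the top.
interface Keyword {
  phrase: string;
  monthlyVolume: number; // e.g. from the Ahrefs keyword generator
  difficulty: number;    // 0–100 keyword difficulty rating
}

const keywords: Keyword[] = [
  { phrase: "volunteer management app", monthlyVolume: 900, difficulty: 25 },
  { phrase: "internal talent management software", monthlyVolume: 100, difficulty: 60 },
  { phrase: "task dispatch", monthlyVolume: 150, difficulty: 40 },
];

// Naive "opportunity" score: more volume is good, more difficulty is bad.
const opportunity = (k: Keyword): number => k.monthlyVolume / (k.difficulty + 1);

[...keywords]
  .sort((a, b) => opportunity(b) - opportunity(a))
  .forEach(k => console.log(`${k.phrase}: ${opportunity(k).toFixed(1)}`));
```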

Section 2: Running growth experiments

The second chapter of the minidegree is about running growth experiments. I only managed to get through half of the chapter during the week, because I kept getting actively side-tracked.

Side project of the week number 1: experiment priority sheet

Many of the chapters open with a visiting lecture or two, usually live recordings from CXL events with industry professionals. I get instantly side-tracked by a presentation from Hotwire product managers where they go into detail about how they prioritise experiments with a point-scoring framework. Obviously I spend the next several hours setting up a scoreboard in Google Sheets. What objective rules do you need in place to prioritise one idea over another?

Their recommended categories are:

  1. Reach: how many people will be impacted by the change?
  2. Lift: what reason do we have to expect a strong reaction?
  3. Strategic fit: how aligned are the proposed changes with core direction?
  4. Resources: how much creative and technical power would the change need?

Is the change above the fold? One point. Is the change backed up by data? One point. Is it targeting your one core metric or problem? Point. Never give an instant green light for an idea that does not get a high score on this board.
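In code instead of a spreadsheet, the scoreboard logic is roughly this. A toy sketch, assuming one point per checklist question; the questions paraphrase the categories above, and the example ideas and their answers are invented:

```typescript
// One point per "yes" on the checklist; higher total = higher priority.
interface ExperimentIdea {
  name: string;
  aboveTheFold: boolean;      // reach: is the change visible without scrolling?
  backedByData: boolean;      // lift: do we have evidence it matters?
  targetsCoreMetric: boolean; // strategic fit: does it hit our one core metric?
  cheapToBuild: boolean;      // resources: little creative/technical power needed?
}

const score = (idea: ExperimentIdea): number =>
  [idea.aboveTheFold, idea.backedByData, idea.targetsCoreMetric, idea.cheapToBuild]
    .filter(Boolean).length;

const backlog: ExperimentIdea[] = [
  { name: "Rewrite homepage headline", aboveTheFold: true, backedByData: true, targetsCoreMetric: true, cheapToBuild: true },
  { name: "Redesign the footer", aboveTheFold: false, backedByData: false, targetsCoreMetric: false, cheapToBuild: true },
];

backlog
  .sort((a, b) => score(b) - score(a))
  .forEach(idea => console.log(`${score(idea)} points: ${idea.name}`));
```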

When the chapter lessons go deeper into the prioritisation, I write down:

Start with the problem, and never test stupid shit!

It’s really easy to start thinking of solutions without actually having a problem. Everyone has “good ideas”. If they’re not solving anything, they’re not worth your time.

Research before you experiment

Conversion rate optimisation is 80% research and 20% experimentation. I go through two courses on the Research XL framework that pick apart the following six analysis categories that feed growth experiment ideas.

1. Tech analysis

First and foremost: is shit broken? Although this is pretty obvious — people won’t interact if your website or product doesn’t work for them — there are a lot of ideas here on how to really know that shit’s broken. People will not tell you the page won’t work on their browser, or that it doesn’t display right on their specifically weird resolution. They’ll just exit the page and go look at cat pictures on Instagram instead.

In our experience, those couple of people with extra weird setups, like vertical 4K displays, will totally go the extra mile to come and tell you it doesn’t work. And as they’re also super special with their ability to block all analytics tracking, you wouldn’t know unless they told you. But for the common folk, regular analytics will point out if the vast majority of people using Safari are failing on your site.

2. Heuristic analysis
— yeah, OK, totally time for a side project

Side project of the week number 2: website analysis

Heuristic analysis kind of means basic inspection. Just look at your thing and try to evaluate whether it matches the criteria of “a good thing”. It’s best to set specific criteria and get multiple opinions.

There’s a lot of material about different ways to go about this analysis, and it’s a topic in multiple lectures. There are many ways and models to score — general suggestions, branded frameworks, common sense — so I create another Google Sheet and just keep adding criteria from all the different methods to my scoreboard. At some point it’s a lot. Does the page you are looking at communicate relevance, clarity, motivation, urgency, stimulation, orientation, confirmation, convenience, friction, anxiety, security, distraction…

Better to clean it up; many of these are really the same thing from a different perspective. I finally end up with my own favourite five categories (with a lot of sub-questions; see the scoring sketch after the list):

  1. Relevancy — is the page what the user expected? Does it have the same scent and feeling as the ad or link they clicked on? Does the content match the headline?
  2. Clarity — does the user understand what’s on the page? Is there a clear direction to action? Is it easy to grasp?
  3. Friction — are there any difficult hurdles in the way of navigation? Is there a seven-page form to fill in? Is the only contact information a landline phone number? Does the site look shady and insecure? What are the annoying things that would make the user quit early?
  4. Distraction — what’s the irrelevant extra on the page? What doesn’t need to be there?
  5. Motivation — is it clear why the user should continue? Is there an urgency, is there a good deal available for them?
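As for the scoring sketch I promised: a minimal version of what my sheet does, assuming each reviewer rates a page from 1 (bad) to 5 (good) per category. The ratings below are invented:

```typescript
// Averaging across reviewers makes disagreements visible per category.
type Category = "relevancy" | "clarity" | "friction" | "distraction" | "motivation";
type Review = Record<Category, number>; // 1 (bad) … 5 (good)

const pricingPageReviews: Review[] = [
  { relevancy: 4, clarity: 2, friction: 3, distraction: 2, motivation: 3 },
  { relevancy: 5, clarity: 3, friction: 3, distraction: 1, motivation: 4 },
];

const categories: Category[] = ["relevancy", "clarity", "friction", "distraction", "motivation"];

for (const c of categories) {
  const avg = pricingPageReviews.reduce((sum, r) => sum + r[c], 0) / pricingPageReviews.length;
  console.log(`${c}: ${avg.toFixed(1)} / 5`); // low averages = experiment candidates
}
```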

We should totally go over our site with the team (and maybe pester some family members to give their thoughts). I create a new whiteboard for the team (we use Miro, and it’s a pretty awesome product) and paste in long screenshots of all our main site pages (home, product, pricing, etc.).

Already the first page analysis drives me mad. Why did we ever think we had a good site? I can totally get into the mindset of a confused user: “I have no idea what I’m supposed to do here, nor do I understand what the product is”. Whoa, let’s dial down the overthinking part. It’s fine. Look at the data. Most of the users manage just fine. We have people going through these pages daily with success.

But a few really good ideas for adjustments do arise. And guess who already has a scoreboard ready for new experiments!

Right, on with the six pillars of Research XL:

3. Digital analytics

Funnels, goals, analytics. What are the users doing? Which actions can be related to better conversions? Where are the leaks? Yeah, totally time for a side project.

Side project of the week number 3: funnel building

I’m extremely glad that I’m not starting from scratch with our website, and that we already have a good setup with content and traffic that I can look at. I thought I knew the basics of Google Analytics (I use it monthly for business reporting), but oh how wrong I was. There’s a lot to discover, and I end up spending a long evening just looking at different reports of how people interact with our website.

I have in my notes: one single goal for each page! We’re definitely giving our users too many options, and there are too many different paths taken on the site. I go back into the website analysis document and highlight the words “direction to action”.

We already have a bunch of stuff set up in our analytics, but I’m not yet confident enough to mess with the existing configuration. But I have another website! My husband writes 500-page historical fiction novels, and we have an e-commerce site where you can buy the books directly from the author, with an inscription and autograph. (If you read Estonian, you should totally check it out here: vihmakass.ee)

I don’t have a clear activity log to report on what exactly happens next and in which order, but I follow multiple articles on support.google.com and manage to set up a bunch of things that should contribute towards an excellent digital analysis toolkit. Basically I’m just messing around for a very long time.

Eventually I get stuck setting up detailed e-commerce tracking in Tag Manager, because I don’t yet feel confident about directly editing code on the site. But I did manage to set up basic goals to see conversion rates and analyse the path from front page to cart checkout! I think.
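For context, the part I chickened out of looks roughly like the snippet below: with Universal Analytics, Google Tag Manager’s enhanced e-commerce setup expects the site itself to push purchase data into the dataLayer. All ids, names, and amounts here are invented, so check the official documentation before copying anything:

```typescript
// Tell TypeScript about the dataLayer global that the GTM snippet creates.
declare global {
  interface Window { dataLayer: object[]; }
}

// Push a purchase roughly in the Universal Analytics enhanced
// e-commerce shape. All ids, names, and amounts are invented.
window.dataLayer = window.dataLayer || [];
window.dataLayer.push({
  event: "purchase", // custom event name to fire a GTM trigger on
  ecommerce: {
    purchase: {
      actionField: { id: "ORDER-1042", revenue: "24.00" },
      products: [
        { name: "Signed historical novel", id: "BOOK-01", price: "24.00", quantity: 1 },
      ],
    },
  },
});

export {}; // keeps this file a module so the global declaration compiles
```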

Right, on with the six pillars of Research XL:

4. Mouse tracking and form analytics

We have used the Hotjar software before, so I’ve seen heatmaps and click maps of our users trying to navigate the site. From those I already know they don’t bother to scroll down much. The tips and tricks give me a small clue about this — our main page uses a different background colour for each scrolling segment, but single-colour backgrounds get longer scrolls. Writing this down on our new experiment board!

5. Qualitative surveys

Asking questions of those who just bought, and of those who left without buying. I get some conflicting advice about the question format. One of the teachers recommends closed-ended questions for surveys because they’re easier to answer. Another tells me questions should also be open-ended, so you also get the actual answers you weren’t expecting.

But in the end there’s a format I really like that starts with a simple Yes/No and then presents the actual question. Once people have given the simple answer, they’re already in the mindset of finishing the activity, and more likely to type in the next one. Something like this:

  1. Is there anything holding you back right now from signing up? (Y/N)
  2. What’s holding you back from signing up right now? (Open)
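On a web page, the reveal can be as simple as the sketch below; the element ids and the markup it assumes are invented for illustration:

```typescript
// Assumes markup like: two radio buttons named "blocked" (Yes / No)
// and a hidden <textarea id="follow-up"> for the open question.
const yesNo = document.querySelectorAll<HTMLInputElement>('input[name="blocked"]');
const followUp = document.getElementById("follow-up") as HTMLTextAreaElement;

yesNo.forEach(radio =>
  radio.addEventListener("change", () => {
    // Whichever way they answer, they're now engaged: show the open question.
    followUp.hidden = false;
  })
);
```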

I make a lot of notes about the survey section, but eventually I just keep underlining the same concept: define the purpose of the survey in advance and collect only actionable data. Don’t ask out of curiosity; the answering capacity of your audience is limited. Instead, really think about what you will want to do with the answers.

6. User testing

We’ve often had friends come over and try to use our site and product. But I didn’t know about all the services where you can hire people to test your thing remotely and online. My core note about testing is: ignore what they say, look at what they do. So many examples of people struggling to use the product, then leaving stellar reviews about how simple it was.

Eventually, we get to a bit of actual testing.

Something I didn’t know: a test can declare a winner only when all of the following apply:

  1. You’ve reached the sample size you set in advance (don’t run tests with low volume; define the necessary sample size with an appropriate calculator, see the sketch after this list)
  2. The test has run multiple full business cycles (don’t stop mid-week, and don’t test for just a single week)
  3. The results show statistical significance.
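The calculator math itself fits in a few lines. A back-of-the-envelope sketch, assuming the usual defaults of 95% confidence and 80% statistical power for a two-sided test on conversion rates; the baseline and lift numbers are invented:

```typescript
// z-values for the usual calculator defaults.
const zAlpha = 1.96; // two-sided significance level of 0.05
const zBeta = 0.84;  // statistical power of 0.80

// Visitors needed per variant to detect a lift from rate p1 to rate p2.
function sampleSizePerVariant(p1: number, p2: number): number {
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p1 - p2) ** 2);
}

// Detecting a lift from a 3% to a 4% conversion rate:
console.log(sampleSizePerVariant(0.03, 0.04)); // roughly 5300 visitors per variant
```

Plugging in realistic numbers makes the next conclusion pretty unavoidable.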

Yeah, we probably don’t have enough traffic volume to run tests yet. Next week’s lessons will probably be… very theoretical?
