Project Everest

Adopted Experiment

[JAN 19] Content Satisfaction (Solution) - Fiji FarmEd I

Submitted by Liz Watson | Jan 7, 2019 | in FarmEd - Fiji

Lean Phase:

Solution (MVP)


Assumption:

Farmers are satisfied with the information the app delivers


Time Box:

2 Weeks


Key Metrics:


Questions on a 1-5 scale will measure the farmers' satisfaction with the information provided in the application. Average scores higher than 3 will support the assumption that the user is adequately satisfied with the information.


Success Point - If at least 60% of the farmers approached who complete the questionnaire rate the application as satisfactory (average score above 3), this indicates the information is sufficiently detailed and adequately fits the farmers' needs.


Green Light - Proceed: improve and develop channels, and continue to add more information and resources to the application.


Orange Light Range - 40-60%


Orange Light - The app is not providing enough of the key information that farmers need. During the satisfaction surveys, gather information on which areas in particular need improvement and in what ways they can be improved, and use this to guide updates for future versions of the app. Once the information is altered to fit the needs of the farmers, redistribute the app and repeat the questionnaire at a later date.


Failure Point - If fewer than 40% of the farmers approached who complete the questionnaire rate the application as satisfactory (average score above 3), this indicates the information is not sufficiently detailed and does not adequately fit the farmers' needs.
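The green/orange/red thresholds above can be sketched as a simple decision rule. This is purely illustrative: the function and data names are hypothetical, not part of any FarmEd tooling; only the 60%/40% cut-offs and the "average score above 3" definition come from the experiment design.

```python
# Hypothetical sketch of the traffic-light decision rule described above.
# Thresholds (60% green, 40% red) and the satisfaction cut-off (score > 3)
# come from the experiment design; everything else is illustrative.

def satisfaction_light(scores, satisfied_above=3.0,
                       green_at=0.60, red_below=0.40):
    """Classify the experiment outcome from per-farmer average scores.

    scores: list of average questionnaire scores (1-5 scale),
            one per farmer who completed the survey.
    """
    if not scores:
        raise ValueError("no completed questionnaires")
    satisfied = sum(1 for s in scores if s > satisfied_above)
    rate = satisfied / len(scores)
    if rate >= green_at:
        return "green"
    if rate >= red_below:
        return "orange"
    return "red"

# 4 of 5 farmers score above 3 -> 80% satisfied -> green light
print(satisfaction_light([4.2, 3.5, 2.8, 4.0, 3.1]))  # green
```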


Red Light Failure Protocol - The information provided by the app does not adequately fit the needs of those completing the survey

  1. Review the survey questions

  2. Review the needs of those who completed the survey

  3. Review the customer archetypes

  4. Determine whether any areas of information are missing

  5. Use the data gathered to guide future actions


Experiment build:


Pre-departure:

  • Review the HubSpot profiles of those living in the village you are visiting.


During:


Post-departure:

  • Iterate on the survey design after completing every 5 surveys (focus especially on the scale differentiators)
  • Collate data from the questionnaires.
  • Average the responses to see which sections lack satisfactory information and which provide adequate information.
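The collation step above (average the responses per section, flag sections below the satisfaction threshold) can be sketched as follows. The section names and scores are made up for illustration; only the 1-5 scale and the score-above-3 threshold come from the experiment design.

```python
# Illustrative sketch of the post-departure collation step: average the
# 1-5 responses per section and flag sections at or below the
# satisfaction threshold. Section names and data are hypothetical.

from collections import defaultdict

def section_averages(responses):
    """responses: list of dicts mapping section name -> 1-5 score."""
    totals = defaultdict(list)
    for response in responses:
        for section, score in response.items():
            totals[section].append(score)
    return {section: sum(scores) / len(scores)
            for section, scores in totals.items()}

surveys = [
    {"pests": 2, "soil": 4, "weather": 5},
    {"pests": 3, "soil": 4, "weather": 4},
]
averages = section_averages(surveys)
lacking = [s for s, avg in averages.items() if avg <= 3]
print(averages)   # {'pests': 2.5, 'soil': 4.0, 'weather': 4.5}
print(lacking)    # ['pests'] -> this section needs better information
```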
edited on 13th January 2019, 21:01 by Jess Riley

Fiona Aaron Jan 8, 2019

I really like the premise behind this experiment. It is important to keep in mind though that considering the app hasn't launched properly yet, 2 weeks might not be enough time for the farmers to be able to use all the functions of the app as intended and that might sway the results. Might be interesting to see what they rate the different features and then ask whether they've actually had a need to use that feature in the past 2 weeks (i.e. they might not have encountered any pests or diseases so might not have needed to use this).

I also feel that the metrics are a bit harsh. Failure is if 40% rates it as satisfactory but could use improvements? To me, a satisfactory service would be meeting some of my criteria as a customer but not all and therefore needs improvement. I would like to see the failure metric adjusted to be more along the lines of if X% of farmers dislike or don't find any value in the features we currently have and therefore feel their $1 per month is wasted then that's failure.

What are your thoughts?


Jess Riley Jan 8, 2019

Hi Team,

Can you please attach your questionnaire as a PDF rather than a link to the drive so that our wider community are able to engage and provide feedback on your survey?

I am really excited for this experiment but I wonder if you could clarify for me the assumption? Are we assuming that our information is satisfactory or are we assuming that the information we are providing is valuable enough for farmers to a) pay for, b) actively engage with our service regularly, c) recommend that others engage with the service? I would argue the latter and as an extension say that the satisfaction tracker is our method ("how") of measuring this value. Keen to hear what you think of the relationship between satisfaction and value and whether we can make assumptions around this. I believe it will be a good indicator of our value from the perspective of our customers.

After looking at your questionnaire I am interested in how you are keeping and tracking your data. Are you going to move the questionnaire onto a Google Form or Survey Monkey? If this hasn't been considered I would STRONGLY recommend it, as I know how easy it is for data to get lost, and this data is invaluable to us and to perfecting our product.

Also, with your questionnaire, be careful with asking "is the information correct", as it is likely they have no measure for this, hence the need for our solution. This will be explored down the line once the information/advice is able to be implemented by the Farmers and is being checked and double checked by our RnD team as well.

Similarly to Fiona, I agree that the metrics need to be revisited. Also, with Fi's point on the application not yet being readily available, this experiment could be used in the meantime to validate the features of the application that we know are included and gauge which features our customers are most looking forward to engaging with, as a way to indicate to our RnD team which areas would be most beneficial to focus on in the coming months of the application build. You could run the questionnaire before and after the app to a) measure value, b) manage expectations until the beta is released and as it is updated. Or you could just wait, but I think there is still good data to be collected.


James Balzer Jan 10, 2019

I agree with most of these points @jess riley. However, just be careful when it comes to logging data into a Google Forms document. From my experience with FarmEd in July, we stuffed up our Google Forms document so much, as we didn't have the means through which to record our data accurately in a long term, consistent manner. As long as the people recording and logging the data are very careful to make sure they know how to accurately record data in a standardised, long term manner, then they should be good.


Jess Riley Jan 8, 2019

Status label added: Experiment adopted


Jess Riley Jan 8, 2019

Jessie I'm also keen to hear where you're at with this!


James Balzer Jan 11, 2019

I would just be careful on having anecdotal, subjective and non-tangible means of assessing the user experience of the app for the farmers. The standard 1-5 measurement is fine if you can have an objective and tangible means of assessing what number on the scale the user experience should be associated with as per the UX testing. If you're kind of just taking a guess, it makes the data gathering process inaccurate and therefore the conclusions not particularly clear.


Felix Zerbib Jan 15, 2019

Hi Jimmy. Our 1-5 scale is actually our attempt to collect more accurate data about the farmers' user experience, and is based on our team's assessment of how farmers interact with the app and how they react to the features and value we provide. I realise how much of an emphasis we need to have on objective measurement, but as I'm sure you've experienced in the past, the usual close-ended questions we tend to ask don't actually provide useful information, due to the social pressure in one-on-one surveying that generally leads our potential customers to positively affirm the value we offer rather than provide us with critical feedback.
If you had any other tips based on your experience in Timor as to how this can be avoided I would gladly welcome that as we're currently deep into the surveying phase.


Jess Riley Jan 13, 2019

Status labels added: Proposed Experiment, Under Review

Status label removed: Experiment adopted


Jess Riley Jan 15, 2019

Status label added: Experiment adopted

Status labels removed: Proposed Experiment, Under Review


Brodie Leeson Jan 15, 2019

Agree with the great feedback being posted here so no need for me to go into it. I love how in depth the feedback is because it shows the value we perceive in this experiment. I would love to see the feedback here addressed and the experiment being conducted ASAP now that you have the app. While we work through some of the remaining aspects of it in preparation for release to customers this looks to be an excellent use of time.


Felix Zerbib Jan 15, 2019

Glad to let you know that this experiment was adopted as of yesterday morning, and we're aiming to get as much feedback as possible from interactions with farmers. We've prepped an experiment outcomes draft and that will be posted soon with regular updates to our findings.
