Take the Research Challenge: Guess these Study Outcomes

Test your instincts with real research turned into a fun quiz.
- Each scenario below is drawn from a published study in marketing, supply chain, or education.
- Read each summary carefully and choose the outcome you think the researchers found: A, B, or C.
- Answers and explanations (with the original research sources) are revealed at the end, so you can check what actually happened and why.
Study 1: Call-to-Action Button Shape and Clicks

A team of marketing researchers compared two versions of an e-commerce landing page: one with a curved (rounded) “Buy Now” button and one with a sharp-angled (square) button. They measured the click-through rate (CTR) for each design.
What happened?
A) The curved button significantly increased clicks.
B) The curved button significantly decreased clicks.
C) There was no measurable difference in clicks.
Study 2: Lightning-Fast Shipping and Returns

An online fashion retailer analysed customer behaviour based on delivery speed. They found that some orders arrived much faster than average (due to local warehouses). The researchers checked how delivery speed affected the likelihood of customers returning the product.
What was the result?
A) Return rates increased when delivery was faster than usual.
B) Return rates decreased with faster delivery.
C) No change in return rates.
Study 3: Learning Styles vs. Exam Scores

A meta-analysis examined hundreds of students across many studies to test the “learning styles” idea (teaching to a student’s preferred style, like visual or auditory). Researchers looked at performance when instruction matched each student’s style versus when it didn’t.
What did they find?
A) Matched instruction significantly improved exam scores.
B) Matched instruction gave a small improvement in scores.
C) There was no meaningful difference in scores.
Study 4: “Healthy” Label on Yoghurt

A food marketing team tested yoghurt packaging. One design prominently touted the product as “Healthy”, another emphasised “Great Taste”, a third carried both labels, and a control had neither. They measured how much more or less consumers were willing to pay for the yoghurt.
What happened to willingness-to-pay when the label “Healthy” was used?
A) “Healthy” label increased willingness to pay (people paid more).
B) “Healthy” label decreased willingness to pay (people paid less).
C) “Healthy” label did not affect the price people would pay.
Study 5: AI Chatbots in Buyer–Supplier Negotiations

A procurement experiment tested an AI chatbot (custom-trained ChatGPT) acting as the buyer in supplier negotiations. The chatbot was prompted in two different styles: a competitive (hard-nosed) negotiation style versus a collaborative (friendly) style.
The researchers measured deal outcomes like price discount and negotiation speed, as well as supplier trust and satisfaction.
What happened?
A) The competitive chatbot achieved a higher price discount and quicker deals.
B) The competitive chatbot achieved a lower price discount and slower deals.
C) No measurable difference in deal outcomes.
Study 6: Handwritten vs. Typed Lecture Notes

Education researchers compared tertiary education students taking notes by hand on paper to those typing notes on laptops. They then tested learning outcomes on the same material.
What was the result?
A) Handwritten-note students scored higher on subsequent tests.
B) Typed-note students scored higher on subsequent tests.
C) There was no difference between groups.
Answers & Insights

Study 1:
A) Clicks increased for the curved button.
Contrary to the common assumption that such subtle design changes don’t matter, this large field study found that curved (rounded) buttons attracted more clicks than sharp-edged buttons.
The researchers attribute this to visual appeal: curves create a more approachable feel, motivating users to click.
In short, a subtle design tweak (rounded shape) did boost CTR in real-world experiments.
Read more about the original research here.
Study 2:
A) Return rates increased with faster shipping.
Surprisingly, customers who got their orders especially quickly were more likely to return them.
In fact, the study found “the faster a product is delivered, the greater the likelihood that it is returned”.
The authors explain this by cognitive dissonance: quick delivery leaves shoppers with less time to rationalise the purchase, so they may second-guess and return the item.
(This effect was strongest for first-time buyers.)
Read more about the original research here.
Study 3:
C) No meaningful difference.
Despite the popularity of learning styles in education, the evidence shows little payoff for matching teaching to a purported learning style.
A recent meta-analysis found only a very small overall effect and noted that only about one-quarter of outcome measures showed any benefit.
In practice, the gains are “too small and too infrequent to warrant” tailoring lessons to individual styles.
Put simply, teaching to a stated learning style didn’t noticeably boost exam scores.
Read more about the original research here.
Study 4:
B) Decreased willingness to pay.
In this field experiment, slapping “Healthy” on the label actually backfired: consumers offered about 18% less for the “Healthy”-labelled yoghurt than for the plain control.
The combined “Healthy + Great Taste” label lowered willingness to pay even further.
It seems “healthy” made shoppers worry about taste or question what “healthy” meant.
In other words, the health claim made the product feel less indulgent unless it was backed by an explanation.
Read more about the original research here.
Study 5:
A) Competitive AI yielded higher discounts (but lower trust).
The AI-chatbot experiment showed that when the ChatGPT-based buyer was scripted to be competitive, it indeed achieved better deal terms (larger price discounts and quicker negotiations).
However, suppliers reported significantly more trust and satisfaction when the AI was prompted to be collaborative.
In other words, an aggressive AI approach cut costs for the buyer, but a friendly AI approach built stronger supplier relationships.
(So there was a clear trade-off: using a hard-nosed chatbot got lower prices at the expense of supplier goodwill.)
Read more about the original research here.
Study 6:
A) Handwritten notes gave better scores.
The data strongly favoured pen-and-paper.
Students who took notes by hand performed better on later tests than those who typed.
One meta-analysis found that about 9.5% of the handwriting group earned top grades (As) versus only 6% of the typing group.
In other words, handwriting your notes boosts retention – likely because writing forces more processing and summarisation.
Read more about the original research here.
How did you do?
These findings show how counterintuitive research results can be.
From design tweaks and delivery speed to education tricks, the “obvious” guess isn’t always right.
Testing and data are essential for seeing what actually works.
Whether you nailed each one or got stumped, hopefully these real studies gave you a fresh perspective on research findings that you can apply in your own work.
Share your thoughts on this article or send your own article or content proposals to Dr Helena van Wyk at helenavw@immgsm.ac.za.