For the past 15 years, customer satisfaction has been measured using largely the same methods. Surveying consumers with an NPS question has proven effective and reliable. In an increasingly complex world, however, it becomes necessary to look further and deeper. Additionally, predicting customer behavior has become an increasingly important need for companies, and surveying can be a valuable source of data.
Based on a bike-sharing service, I illustrate how traditional customer satisfaction metrics can be improved and validate the proposed solutions through user tests.
The existence of many companies depends on a base of satisfied customers. Customers are the key factor in a company’s development. Thus, firms need to provide a valuable and unique customer experience that satisfies their clients’ needs. This satisfaction covers not only the feelings associated with the purchasing process but also the experience before and after the purchase itself.
In today’s world, when companies want to win new customers and keep existing ones, customer loyalty has become crucial for staying relevant in any industry. In recent years, customer satisfaction has been primarily measured using the Net Promoter Score (NPS). NPS - first introduced by Fred Reichheld in 2003 - is a way of quickly rating customers’ happiness with a given product or service. It has become a widely recognized measurement tool, increasingly adopted by organizations globally. It helps understand customers’ loyalty towards a brand and whether they are more or less likely to promote a company.
Cultural differences can affect customer feedback, and in today’s increasingly complex world, there is a need for deeper and more thorough customer satisfaction analysis. Predicting customer behavior has also become an increasingly prominent need for e-commerce companies, and surveying can be a valuable source of data.
Based on user feedback analysis of a bike-sharing service, this work aims to suggest a more accurate way of measuring customer satisfaction. To this end, a survey prototype was designed, and the proposed solutions were validated through qualitative user tests.
My bachelor thesis will examine the topic of ‘Improving UX of customer satisfaction metrics through conversational surveys’. First, I would like to focus on the definition of customer satisfaction, its importance, and the types of parameters for its measurement.
In his book Marketing Metrics, Paul Farris defines customer satisfaction as ‘the number of customers, or percentage of total customers, whose reported experience with a firm, its products, or its services exceeds specified satisfaction goals’.  In other words, customer satisfaction is how satisfied a customer is after doing business with a company. It not only shows how happy customers are with a particular product or service but also reflects their overall experience with the company. Most importantly, the book also says that customer satisfaction is a leading indicator of consumer purchase intentions and loyalty. Leading indicators are measurable factors that precede an event or lead to a result. In this case, they precede and build up to increasing buying intentions and product loyalty.
A business owner should never ignore the importance of customer satisfaction, as it is one of the most critical factors behind the success or failure of a business. It is therefore essential to measure customer satisfaction and improve on it - to make customers more loyal. If a company does not care about customers’ satisfaction, it should not expect them to care about their services or products.
We can say that if a company does not measure customer satisfaction, it cannot identify unhappy customers. If a company does not know who is unhappy, it cannot figure out why they are unhappy, and the customers can leave. If customers are lost faster than won, the business will fail.
According to research by Esteban Kolsky, 13% of unhappy customers will share their complaint with 15 or more people. For every dissatisfied customer who complains, there are approximately 25 other people who are unhappy with the company but do not voice their opinion. Those are clients the company will most likely lose if it does not take proper action. Therefore, tracking customer satisfaction metrics is essential. A customer complaint highlights a problem - be it an issue with the product or an internal process - and by learning about this problem directly from customers, it can be investigated and fixed, avoiding further complaints in the future. Companies that prioritize customer satisfaction grow and increase revenue. In fact, a study by Harvard Business Review found that customers who have a complaint handled in less than five minutes go on to spend more on future purchases. This means that a customer complaint can become very profitable when the problem is resolved quickly. Ideally, a business should continuously seek feedback to improve customer satisfaction.
«When customers share their story, they’re not just sharing pain points. They’re actually teaching you how to make your product, service, and business better. Customer service organization should be designed to effectively communicate those issues.»
Kristin Smaby, Customer Service Expert 
We determined that customer satisfaction is important for business and should be measured. Now let us speak about its metrics.
Every method of collecting data on customer satisfaction comes down to a customer survey.
With the help of digital analytics tools, we can identify if users research what they purchase, how they interact with different features of a product and react to possible issues. However, we cannot evaluate their emotional response. Measuring customer satisfaction through customer surveys enables us to look at emotional reactions.
Luckily, customer satisfaction measurement tools help a company collect valuable feedback. As a result, it can introduce the very changes and improvements a customer was asking for. It creates a better experience and a more pleasant customer journey.
What does a customer satisfaction survey look like? What kind of questions do you ask? How do you determine a customer satisfaction score? Let us try to answer these questions by reviewing the most popular metrics and methodologies.
«Measurement is the first step that leads to control and, eventually, to improvement. If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.»
H. James Harrington, project manager at IBM for 40 years
Customer Satisfaction Score (CSAT)
Customer Satisfaction Score is the most common customer satisfaction survey methodology. It measures customer satisfaction with a business, service, purchase, or interaction. Customers are asked to rate their satisfaction on a linear scale - from 1 to 3, 1 to 5, or 1 to 10. There is no universal agreement on which range is best.
A big advantage of Customer Satisfaction Score lies in its simplicity: It is an easy way to close the loop on customer interaction and determine whether or not it was effective in producing customer happiness.
Customer Effort Score (CES)
Customer Effort Score is very similar to customer satisfaction score, but instead of asking how satisfied customers are, they are required to rate the ease of interacting with a business. In other words, CES measures the effort exerted by a customer to attain goods or services. If customers have to do plenty of work to purchase from a company, they will be likely to shop elsewhere. The idea of a survey is to help find out if customers have a hard time interacting with a brand, and take the necessary actions to streamline processes with the help of the survey data.
Net Promoter Score (NPS)
Net Promoter Score asks “How likely are you to recommend this company to a friend or family member?” — a question called “likelihood to recommend” or LTR. It attempts to measure not only customer satisfaction but also customer loyalty. NPS is used by many companies to measure customer experience and predict business growth. It serves as an alternative to traditional customer satisfaction research and is believed to correlate with revenue growth. NPS has been widely adopted, with more than two-thirds of Fortune 1000 companies using the metric.
The company can also easily segment responses into three categories: detractors, passives, and promoters.
Promoters (9–10) are loyal customers who will keep buying from you and refer to others, helping your business grow.
Passives (7–8) are mostly satisfied but unenthusiastic and are potentially vulnerable to competitors.
Detractors (0–6) are unhappy customers who can actually damage your brand and growth through negative word-of-mouth.
Net Promoter Score is then calculated by subtracting the percentage of “detractors” from the percentage of “promoters” and is a number that ranges from -100 (worst case scenario: all responses are detractors) to +100 (best case scenario: all responses are promoters).
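The segmentation and calculation described above can be sketched in a few lines of Python (a minimal illustration; the bucket thresholds follow the standard NPS definition):

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 ratings."""
    total = len(scores)
    promoters = sum(1 for s in scores if s >= 9)   # promoters: 9-10
    detractors = sum(1 for s in scores if s <= 6)  # detractors: 0-6
    # Passives (7-8) count toward the total but not toward the score.
    return round(100 * (promoters - detractors) / total)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(nps([10, 9, 9, 10, 9, 7, 8, 7, 3, 5]))  # -> 30
```

Because passives appear only in the denominator, a survey full of 7s and 8s yields a score of 0 even though no one is actively unhappy - one reason the number alone needs qualitative follow-up.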
Big companies like Apple, Amazon and Airbnb, among others, rely on this measurement tool to help them grow. There is a direct correlation between NPS, the likelihood of subscription renewal, and expansion of the business. NPS is an easy way for firms to receive feedback from customers and for customers to provide feedback. Over time, it has proved to be a valuable and credible system for customer-focused companies. NPS score and feedback are less likely to be affected by particular events. As a result, a business gets specific and meaningful feedback, with fewer outliers caused by recent positive or negative customer experience.
We have covered the most popular approaches, but who should fill out customer satisfaction surveys? Ideally, every customer who interacted with a business should fill in a questionnaire. It also matters through which channel the survey is delivered - for example, email, in-product, website, or app. The frequency of delivery and the target audience within the customer base may vary from business to business.
The customer satisfaction metrics mentioned above are commonly used and straightforward, but they do not cover the full scope of customer satisfaction surveys. Depending on its goals, a company can also use longer surveys and more than one methodology.
There is also a range of customer satisfaction survey tools that companies can use to build their surveys, including «traditional» customer satisfaction survey methodologies. I am going to mention some of them.
Delighted primarily uses the Net Promoter System methodology to gather feedback from customers. It is possible to set up a CSAT or CES survey, but the company suggests working with NPS surveys. The channels of distribution are email, website, and SMS.
Wootric enables companies to track Net Promoter Score, Customer Satisfaction Score, and Customer Effort Score metrics. Wootric runs surveys from the web and native iOS/Android mobile apps. Its greatest feature is so-called sentiment analytics, also known as opinion mining (a field within natural language processing that builds systems to identify and extract opinions within text), used for understanding customer comments.
Survey Monkey is probably the most well-known name in the realm of online survey tools. It gives access to an extensive library of pre-written questions; NPS, CSAT, and CES surveys; and other survey templates, including open-ended and multiple-choice questions. Surveys can be sent via email, web, and social networks. However, in my opinion, even though this service offers templates, most of them are long, not optimized, or poorly reasoned from the user’s point of view.
Speaking of the NPS survey, let us imagine we received 1000 customer surveys and a score of 35: 50% promoters, 15% detractors, and 35% passives. How can passives be turned into promoters? What can be done about the detractors? How can the score and the customer experience be improved?
Net Promoter Score is used to measure the loyalty of a company’s customer relationships. The best way to do so is to let customers express their opinion in their own words. NPS is used by many companies to measure customer experience and predict business growth. It is primarily built around one simple question: «How likely is it that you would recommend our company to a friend or colleague?» Many companies follow up with another question, such as «Tell us a bit more about why you gave a score of 9» or «What can we do to improve?». Does one question alone give companies the insights they need? An open-ended question can be used to collect feedback on different topics without forcing the customer to explain what exactly they did not like about a product or service. A firm might, for example, get the response “I do not like your service”. What aspect of the service were they unhappy with? Due to such shortcomings, surveys cannot provide all the deep customer insights that lead to action. Also, if the response rate is too low, survey results can misrepresent the real feelings of the broader customer base. This could lead companies to draw wrong conclusions about their business and features.
I tried to find examples of what big companies that use or have used NPS surveys think about its future. Data scientist Lisa Qian from Airbnb notes: «We find that higher NPS does, in general, correspond to more referrals and rebookings. But we find that controlling for other factors does not significantly improve our ability to predict if a guest will book on Airbnb again in the next year. Therefore, the business impact of increasing NPS scores may be less than what we would estimate from a naive analysis.»
NPS is about learning and improving, and a number alone will not provide the information necessary for improving customer experience. One needs to go deeper to understand the reasons behind a score. One needs to have a conversation. A question that is being asked can vary depending on the customer’s previous choices. If they are unhappy with delivery, it should automatically route to relevant follow-up questions. Not only does it enable better analysis, but it also shows respondents that a company listens to their feedback.
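Such score- and topic-dependent routing can be implemented as a simple branching map from answers to follow-up questions. The sketch below is hypothetical; the topics, question texts, and function names are illustrative, not part of any existing survey platform:

```python
# Hypothetical follow-up routing: the next question depends on the topic
# the respondent selected, so a customer unhappy with delivery gets
# delivery-specific questions instead of a generic comment box.
FOLLOW_UPS = {
    "delivery": "What went wrong with your delivery?",
    "product": "Which aspect of the product disappointed you?",
    "support": "How could our support team have helped you better?",
}

def next_question(score, topic):
    """Route promoters to a positive prompt, others to a topic-specific follow-up."""
    if score >= 9:
        return "Great! What did you like most?"
    # Fall back to an open-ended question for unrecognized topics.
    return FOLLOW_UPS.get(topic, "What can we do to improve?")

print(next_question(4, "delivery"))  # -> "What went wrong with your delivery?"
```

The same table-driven pattern extends naturally to deeper branches: each answer can carry a key that selects the next node in the conversation.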
In today’s highly competitive market, every touch point counts. Why should capturing feedback not be treated as a touch point with customers? Every organization seeks feedback, but standard web questionnaires do not provide good user experience, especially on mobile.
While working with the NPS platform Zenloop, I came up with an idea to improve the user experience of traditional surveys and make it possible for the user (company) to deploy conversational surveys with multiple questions. Conversational surveys allow companies not only to collect customer satisfaction metrics but also to ask customers how they feel and why they chose a particular score. In my thesis, I build a prototype of such a multiple-question survey. As an example, I provide the architecture of a survey for a bike-sharing app on mobile. I first identified the ‘pain points‘ of bike sharing and built the prototype based on them. Afterwards, I tested the survey on different users and compared its results (effectiveness) with the results of a normal NPS survey.
Is it possible to get more insights into the customer experience by using conversational surveys? Can a conversational survey become a solution to understanding customer needs? Can they become a more effective tool for measuring customer satisfaction?
Before speaking about conversational surveys, I would like to understand what conversational design and conversational interfaces mean.
Conversation is the most natural form of human interaction, but until recently, we were not able to use it to interact with computers. Over the last few years, there has been a revolution in artificial intelligence. As a result, conversational interfaces have reached another level, allowing people to speak with almost every device. Conversational interfaces are powerful because users do not need to be taught how to use them - we all know how to hold a conversation.
Conversation design is a design language based on human conversation, similar to how material design is a design language based on pen and paper. It is a combination of many different segments of the design industry, including voice user interface design, interaction design, visual design, motion design, audio design, and UX writing. 
Nowadays, for almost every user it is natural to interact with others online. When holding a conversation with a computer, it is crucial to consider how words and format represent the business. It is important to build the conversation in a way that ensures the best experience for users. The process of conversation design must account for these and other challenges involved in the conversation flow.
Having introduced conversation design, let us now look at conversation design for customer surveys and understand what a conversational survey is and how it can improve traditional surveys.
As I already mentioned in the problem statement, some companies are afflicted by falling response rates, which might suggest that customers do not want to talk. However, customers often use social media such as Twitter or Facebook to contact companies, so the problem is probably not that customers do not want to talk, but that they suffer from outdated traditional survey methods. Too many surveys are long and complicated.
Research from Stanford University shows that the quality of data deteriorates when respondents need too much time to fill out a survey. Traditional surveys do not always produce the results that brands are looking for. That is why conversational surveys can become a new generation of customer experience surveys.
Conversational surveys enable a conversation with customers about how they feel and why they gave a particular score. Giving feedback through conversation makes the experience more enjoyable and increases the chances that customers will take the time to provide the information you need. There are different ways to build a conversational survey. Let us think of a survey as a one-sided conversation: one person asks questions, the other answers them. Great and smart surveys, similar to great and smart conversations, have a natural flow and feel. We can create a conversational survey manually, providing relevant questions and answers that a user can choose from. Another way is to use conversational AI (a chatbot) and real-time reactions to give the impression that a bot understands feedback and listens. It can encourage each customer to deliver valuable insight into the different features of a product or service. Some chatbots use natural language processing (NLP) systems, but many simpler ones scan for keywords within the input and pull the reply with the most matching keywords; in such bots, most of the setup is done manually. Today, most chatbots are accessed via virtual assistants such as Facebook Messenger, Google Assistant, and Amazon Alexa, among others.
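The keyword-scanning approach described above can be sketched in a few lines (a toy illustration, not a production NLP system; the keywords and canned replies are invented for a bike-sharing context):

```python
# Toy keyword-matching bot: score each canned reply by how many of its
# keywords appear in the customer's message, then pick the best match.
REPLIES = {
    ("bike", "heavy", "broken", "seat"): "Sorry about the bike! Which part caused trouble?",
    ("app", "crash", "login", "payment"): "Thanks for flagging the app issue. What happened exactly?",
    ("parking", "station", "slot"): "Parking can be tricky. Where did you try to return the bike?",
}
DEFAULT = "Thanks for your feedback! Can you tell us more?"

def reply(message):
    """Return the canned reply whose keyword set best overlaps the message."""
    words = set(message.lower().split())
    best = max(REPLIES, key=lambda keywords: len(words & set(keywords)))
    # If nothing matched at all, fall back to a generic prompt.
    return REPLIES[best] if words & set(best) else DEFAULT

print(reply("the bike seat was broken"))  # matches the bike-related keywords
```

A real chatbot would add stemming, synonyms, and context tracking, but even this crude matching shows why such bots feel responsive: the follow-up question visibly reacts to what the customer wrote.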
From a company’s point of view, conversational surveys provide customer insight that helps understand which features users like or dislike and, more importantly, why.
In the practical chapter of this thesis, «Practice», I research the bike-sharing topic by identifying the biggest user ‘pain points’ of bike-sharing services, manually build a conversational survey, conduct user tests of this survey, and compare the results with those of a traditional NPS survey.
Before designing a customer survey for a bike-sharing company, I conducted detailed research on the topic, including common problem areas of bike-sharing from the user perspective and the reasons why users might not enjoy their rides. To get started quickly, I organized a brainstorming session with peers who use bike-sharing daily. I raised the following question: ‘What problems can you anticipate, or have you experienced, before renting a bike, or during and after your ride?’ After 30 minutes, I collected written responses from the group and combined the user problems into five categories: 1) bike, 2) external conditions, 3) app, 4) service, 5) branding & emotions.
The brainstorming proved to be very productive. I gained a comprehensive overview of bike-sharing and its problem areas. While interviewing people from two different cities - Berlin and Moscow - it became clear that bike-sharing issues differ between countries. In Berlin, for example, one does not need to park a bike at a special station - it can be left anywhere within a parking zone, such as inside the Ringbahn. In Moscow, however, a bike must be returned to a parking station. A problem emerges when Muscovites cycle to work during rush hour and cannot find a free spot at the station to park. Berliners, on the other hand, argue that parking zones are too small. Another pain point I found is that people in Berlin are often annoyed by sharing bikes, mostly from Mobike, that are just lying on the streets, broken and no longer used. Some even assumed this was a marketing strategy to make the company more visible. The group also mentioned the lack of cycling lanes in Russia, which is one of the main issues there.
Different countries have different problems, and this brainstorming workshop inspired me to start the design process of my own customer survey.
My work focuses on mobile surveys; thus, all wireframes are for mobile devices. First, I created wireframes for testing purposes. I compiled a list of screens to cover all possible scenarios, namely the problem areas of a bike-sharing service discovered at the workshop. I began with grayscale wireframes to detail the flows. I also made a low-fidelity prototype to test the idea with users and fix potential problems at an early stage. The main purpose of these wireframes was to create multiple flows and explore different ideas. Before moving on to high-fidelity visual designs and implementation, I created five prototypes to understand how the new solutions work. These prototypes were used to run the first round of user tests. I decided to run the tests with users who cycle daily and are familiar with NPS and other customer surveys, so that I could check which flow performs better. In some flows, I use a scale from 0 to 10 as in NPS; in others, scales from 0 to 5, as well as stars. The overall goal was to understand what works best for users.
To measure the success of these improvements, I tracked:
At this stage of the design process, early user feedback was crucial to eliminate drawbacks and pick the best flow for my survey.
This work not only aims to build a prototype of a bike-sharing survey and test it but also to demonstrate how bike-sharing companies or any other company can create similar surveys. I want to create a prototype of a service that specializes in building different kinds of surveys for any sort of business.
When I first thought about a name and a logo for my service, I realized that I wanted the choice options to take the form of bubbles of different sizes that one can click on. I did not want to develop a boring survey design, but a playful and attractive one, so that it catches interest and is fun to complete. Ultimately, I came up with Bubble. In preparation for the logo design, I researched the design trends of 2019. While working on Bubble, I applied ‘Bold Colors’, ‘Gradients’, ‘Geometrical & Asymmetrical Shapes’ and ‘Big.Bold.Better’ fonts.
Based on the user tests and feedback from the first round, I moved on to a high-fidelity prototype of the survey. My design choices were based on simplicity and the design trends of 2019. I tried to create an emotional impact with the bubbles and used the shape of the logotype as an overlapping element. When the high-fidelity prototype was completed, I converted it into an interactive prototype using Invision. The comment screen is programmed and linked to the Invision prototype so that I receive users’ answers directly via email. This prototype was used to run the second round of user tests.
I would also like to explain the idea behind the bubbles. A reader typically scans a vertical line down the left side of the text while looking for keywords or points of interest (the F-shaped pattern). For every respondent, the position of each answer is different, and the bubbles can be bigger or smaller. The interface should present content in a way that matches how users prioritize information; random bubble allocation makes all answers equivalent. The bubbles are also animated on click and disappear smoothly, becoming smaller before the next screen appears.
As previously mentioned, I wanted to test my survey on different users and compare their insights with the results of traditional NPS testing. Would I receive more insights from it? Would it help to understand customer needs better than with an NPS survey? For the user tests, I also designed a traditional NPS survey, based on familiarity and simplicity.
Before starting the user tests, I did some research on best practices. First of all, it was important to understand how many users are needed for a successful test. There are two types of metrics for measuring an interface: qualitative and quantitative. Quantitative metrics quantify the problem by generating numerical data. Qualitative research relies on the observation and collection of non-numerical insights, such as opinions and motivations. For a quantitative study, it is recommended to test with twenty users, whereas five users are usually considered sufficient for a qualitative test. In my work, I decided in favour of a quantitative study and tested each survey with twenty respondents. In addition, I measured satisfaction and behavioural responses.
For my tests, I needed respondents who use bike-sharing. The survey should appear after the user locks their bike, so I decided to spend some time asking people who were returning their bikes in the city center of Berlin. Before raising any questions, I explained my motivation to the respondents so that they would engage. I also conducted some user tests remotely, with the help of my Russian friends, who submitted surveys after returning their shared bikes. This enabled me to collect opinions from respondents in different countries. The task was formulated as follows: ‘You are returning your bike. Please fill in a survey about your ride. Feel free to answer the questions you want. If you do not wish to answer any question, please close the survey.’ Ultimately, I ran the test with 32 users, eight of whom participated in both surveys. Upon completion of the survey, participants were also asked about their visual satisfaction with the questionnaire and whether they enjoyed its simplicity. Another question was whether users would complete the survey in the real world if it appeared in the app after returning a bike, and how often it could appear before it starts to annoy them.
In general, the results of the tests were very helpful and provided plenty of input. First, let us see the results of the traditional NPS survey.
As we can see from the graph, 35% (15 + 20) of the respondents are detractors, and 65% (35 + 30) are passives. With no promoters, this yields a negative NPS of -35. Only four people answered the follow-up questions. All of them were detractors, and the answers were: «old-fashioned bike», «poorly branded», «I don’t like it», «I use it only because it’s free for 30 minutes».
Now let us see the results of the survey that I built.
The graph demonstrates that users experienced the most issues with the bicycles themselves. From this input, I can draw definite conclusions about what users disliked regarding the bikes, the app, the service, and check-in and check-out. For example, I know that all Mobike users were dissatisfied because the bike was too heavy and too small for them. Another interesting fact is that all four users who participated remotely had problems with parking slots or transactions.
The results also contain more than twenty answers, as the survey allows choosing more than one answer. Participants enjoyed the simplicity of the survey, as well as the idea of choosing bubbles as answers. They found it intuitive, clean, and easy to interact with. However, testing the prototype with the first seven users revealed some pain points:
The “Submit” button on the first screen is superfluous and requires an extra click, which can be avoided.
All seven users disliked the purple colour.
Therefore, I removed the button and changed the colours of the prototype for the next participants. As a result, the subsequent feedback was positive.
All participants of this survey stated that if they were dissatisfied with something, they would go through the survey because it was quick and straightforward. But they would also appreciate it if it did not appear every single time they returned a bike, but only every 5 to 10 rides.
It is impressive how much better the insights of the manually built survey were compared to the traditional NPS survey. Each dissatisfied user made clear with their answer exactly what they did not like about the ride. Two users, although they rated the service with five stars, left the following feedback: «I love the idea of discovering and exploring new cities by bike. Moreover, it’s great for the environment!» and «Good bike and easy experience with the app». Nevertheless, my impression is that people who give a service four or five stars do not want to leave any feedback, whereas unsatisfied customers are ready to go through a short survey in order to let the company know what the problem was.
As already mentioned, I want to build a prototype of a service that specializes in creating different kinds of surveys, which I named «Bubble».
To align the new service and key functionalities with user needs, I created a user persona. While developing it, I referred to possible consumers and tried to understand their job responsibilities and goals.
Marketing analyst at Mobike for two years.
Performs analysis with the use of sales and marketing data to report on customer metrics and KPIs. Assists with customer segmentation, and reports across multiple marketing channels. Provides analysis for Product, IT and Customer Service teams.
To provide the best possible reports for the product team, based on customer feedback data analysis, in order to improve the service.
As the next step, the user persona was used to create a user flow. Creating the UX flow enabled me to understand the user journey and to cover all the screens. To create the user flow, I used an online tool called FlowMapp.
I usually start the design process with low-fidelity sketches. This is how I explore the more technical aspects of the design. I sketched a draft of the service on paper, with the elements and screens necessary for users’ goals, to see if the idea works. I concentrated only on the desktop version of the service.
After completing the sketches, I finalized the visual design and uploaded the prototype in Invision to show the interaction between different pages, and how different features of the service, such as adding or editing a question, will work.
I mostly concentrated on the flow in which a user creates a new survey, adds questions, and sets up the design. The design shows that the service provides industry-specific survey templates; for instance, the user can choose the bike-sharing survey whose design I provided and tested.
In addition, the service has a dashboard. There you can select a survey you created and shared previously, and see the response analytics. In the left-side menu, you can find Plans and Billings, where you can choose a plan and pay for the service; Account Settings; the Help Center; and Integrations, where you can manage integrations with different CRM systems such as Salesforce, PipelineDeals, and others.
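To illustrate what «adding a question» amounts to under the hood, a conversational survey with branching can be modeled as a small data structure. The following is purely a sketch with assumed field names, not Bubble’s actual data model.

```python
# Hypothetical representation of a conversational survey; every field
# name here is an illustrative assumption, not Bubble's real schema.
survey = {
    "template": "bike-sharing",
    "questions": [
        {"id": "q1", "text": "How was your ride today?",
         "type": "choice", "options": ["Great", "Okay", "Bad"]},
        {"id": "q2", "text": "What went wrong?",
         "type": "choice", "options": ["Bike condition", "App", "Price"],
         "show_if": {"q1": "Bad"}},  # branching: only shown after a negative answer
    ],
}

def next_question(survey, answers):
    """Return the next unanswered question, honoring branching rules, or None."""
    for q in survey["questions"]:
        if q["id"] in answers:
            continue  # already answered
        cond = q.get("show_if")
        if cond and any(answers.get(k) != v for k, v in cond.items()):
            continue  # branch condition not met, skip this question
        return q
    return None
```

The `show_if` rule is what makes the survey feel conversational: a satisfied user never sees the follow-up, while a dissatisfied one is asked exactly what went wrong.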
Since the test revealed that the colors needed a change, I replaced the Bubble colors. You can see the difference in the subsequent pictures.
Traditional online surveys are losing relevance, but that does not mean customer feedback is no longer relevant. Through the example of a manually created conversational survey for a bike-sharing company, I showed that a more engaging and personalized survey experience is possible, one that yields higher completion rates and better insight into user needs. Conversational surveys can be used to track and improve customer satisfaction by measuring customer emotion; they allow us to go beyond the metrics and gain a deeper understanding of user responses.
I concede that conversational surveys may not be practical for big companies, where responses can reach several thousand per day. Big companies understand their customers’ feedback by applying sentiment analysis to the comments. Nevertheless, I believe that with conversational surveys, small companies will gain a better understanding of their business issues and customer needs.
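The sentiment analysis mentioned above can, in its simplest form, be a lexicon lookup over comment text. The word lists below are illustrative assumptions, far smaller than any real sentiment lexicon.

```python
# Minimal keyword-based sentiment classifier; the word lists are
# illustrative assumptions, not a production lexicon.
POSITIVE = {"love", "great", "good", "easy", "nice"}
NEGATIVE = {"bad", "broken", "dirty", "slow", "problem"}

def sentiment(comment):
    """Classify a free-text comment by counting positive vs negative keywords."""
    words = set(comment.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("Good bike and easy experience with the app"))  # -> positive
```

At the scale of thousands of comments per day, even such a crude classifier gives a trend line; a conversational survey, by contrast, captures the reason directly at the source.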
In the course of this work, I gained a deeper understanding of the business perspective and worked out how to create a powerful tool for measuring customer satisfaction. I learned that creating something new or different is exciting, especially from a designer’s point of view, but that continuous testing is crucial for success.
This work demonstrates how to create a conversational survey manually, providing relevant questions and answer options for the user to choose from. Another way of providing conversational surveys involves chatbots.
Chatbots make it possible to create a virtual interviewer and provide conversational surveys with a better user experience. Respondents can answer questions on their favorite platforms, such as Facebook Messenger. Combined with the power of artificial intelligence, this offers the next level of real-time understanding and routing.
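Stripped of the AI layer, such a virtual interviewer is essentially a state machine over a question script. The sketch below uses made-up questions and routing to show the mechanics; a real chatbot would add language understanding on top.

```python
# Rule-based virtual interviewer sketch; questions and routing are
# illustrative assumptions, not a production chatbot.
SCRIPT = {
    "start": ("Hi! How was your ride today?", {"good": "thanks", "bad": "why"}),
    "why": ("Sorry to hear that. What went wrong?", {"bike": "thanks", "app": "thanks"}),
    "thanks": ("Thanks for your feedback!", {}),
}

def interview(replies):
    """Walk the script with a list of canned user replies; return the bot's messages."""
    state, transcript = "start", []
    for reply in replies + [None]:
        question, routes = SCRIPT[state]
        transcript.append(question)
        if not routes or reply is None:
            break  # terminal state or out of replies
        state = routes.get(reply, "thanks")  # unknown reply falls through to closing
    return transcript
```

A dissatisfied respondent (`interview(["bad", "bike"])`) is routed through the follow-up question, while a satisfied one goes straight to the closing message, mirroring the branching of the manually built survey.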
This work can be continued by building a survey powered by an AI chatbot. It should be tested when finished, as there is a strong possibility that users will not like conversing with a machine. The onus is on the designer to build an AI chatbot that provides a human-centric experience and reacts to the conversation the way a human would.
 Grzegorz Biesok, Jolanta Wyród-Wróbel, «Customer satisfaction — Meaning and methods of measuring», https://www.researchgate.net/publication/318013354_Customer_satisfaction_Meaning_and_methods_of_measuring
 Alouk Kulkarni, The Australian, Business Review, «Why NPS is dead» (2017), https://www.theaustralian.com.au/business/technology/why-the-nps-is-dead/news-story/8576353e79164b36bd10b1dd212c718e
 Paul W. Farris, Neil T. Bendle, Phillip E. Pfeifer, David J. Reibstein, «Marketing Metrics: The Definitive Guide to Measuring Marketing Performance», (2007), Upper Saddle River, New Jersey: Pearson Education, Inc. ISBN 0-13-705829-2.
 H. James Harrington, in CIO (Sep 1999), p. 19.
 Kristin Smaby, Business, Issue #334, «Being Human is Good Business», (2011), https://alistapart.com/article/being-human-is-good-business/
 Esteban Kolsky, Business, «Customer Experience for Executives» (2015), https://www.slideshare.net/ekolsky/cx-for-executives
 Wayne Huang, John Mitchell, Carmel Dibner, Andrea Ruttenberg, Audrey Tripp, Harvard Business Review, «How Customer Service Can Turn Angry Customers into Loyal Ones», (2018), https://hbr.org/2018/01/how-customer-service-can-turn-angry-customers-into-loyal-ones
 Réal Bergevin, Afshan Kinder, Winston Siegel, Bruce Simpson, «Call Centers for Dummies», p. 345.
 Jennifer Kaplan, «The Inventor of Customer Satisfaction Surveys Is Sick of Them, Too», Bloomberg.com. Retrieved 2016.
 Fred Reichheld, Rob Markey, «The Ultimate Question 2.0: How Net Promoter Companies Thrive in a Customer-Driven World», (2011).
 Lisa Qian, «How well does NPS predict rebooking?», (2015). https://medium.com/airbnb-engineering/how-well-does-nps-predict-rebooking-9c84641a79a7
 Alan Turing, «Computing Machinery and Intelligence», (October 1950)
 Joseph Weizenbaum, «Computer Power and Human Reason: From Judgment to Calculation», (1976), pp. 2, 3, 6, 182, 189. ISBN 0-7167-0464-1.
 «The future is AI, and Google just showed Apple how it’s done», (2016)
 Kristen Backor, Saar Golde, Norman Nie. «Estimating Survey Fatigue in Time Use Study» (2017), http://www.atususers.umd.edu/wip2/papers_i2007/Backor.pdf
 Darren Orf, «Google Assistant Is a Mega AI Bot That Wants To Be Absolutely Everywhere», (2016)
 Loredana Papp-Dinea, Mihai Baldean, «2019 Design Trends Guide», (2019), https://www.behance.net/gallery/71481981/2019-Design-Trends-Guide
 Jakob Nielsen, «F-Shaped Pattern For Reading Web Content», (2006), https://www.nngroup.com/articles/f-shaped-pattern-reading-web-content-discovered/
 Jakob Nielsen, «Quantitative Studies: How Many Users to Test?», (2006), https://www.nngroup.com/articles/quantitative-studies-how-many-users/
 Louise Barkhuus, Jennifer Ann Rode, «From Mice to Men - 24 Years of Evaluation in CHI», (2007)
 Steve Krug, «Don’t Make Me Think», (2000)
I would like to thank Prof. Constanze Langer and Prof. Reto Wettach for their feedback.
I would also like to thank my parents, Shilpa and Katia, for their advice, and all my friends for their support and participation.