Personalizing an online store: our experience with dynamic recommendations

Hello, Habr!



I want to share my experience of how we put together our own personalization system based on "knowledge" about a potential buyer.






The main difference between our solution and the classical ones is that it combines a number of tools into a single bundle and satisfies our list of requirements:





More on that below :) Including the rakes we stepped on, which helped us change the stack for the better.



Background



There is a group of sites on the same subject with similar audiences, all belonging to one holding company. The content differs, but as a rule each site publishes information about products manufactured by the holding. In addition to these "content" sites, there is also the holding's own online store where these products are sold.



Each site has its own marketing team and its own Google Analytics and Yandex.Metrica counters. By analyzing their own audience, the marketers tailored each site's content to the needs of its visitors. Naturally, the audiences of these sites overlapped, but since there was no single counter at the time, the results of the analysis stayed local.



Any analyst makes better decisions with more data at hand. For the sake of this good cause, I developed my own version of a counter.



Why not Google Analytics?



Sampling, plus the inability to pull everything about a user together with his chain of moves from an external site through a bunch of our sites, with the details of what he viewed, and so on. Yes, for typical GA tasks it is a good tool, but when you want to get data in real time and immediately decide what content to show the visitor, then... the analytics giant has no such solution. Dancing with a tambourine around passing the Client ID between sites was not our option.

Setting up a single counter for all the sites would not be an entirely correct "political" decision, and the sites would immediately hit the restrictions of the free version.



I must say right away that I did not reinvent the wheel and limited myself to a modest piece of functionality whose tasks were:



  1. Identify each unique user, regardless of which site in the group he is on. This was necessary to create a single client profile in which all of his data is recorded with reference to each site (see the sketch right after this list).
  2. Track the long chain of transitions between all sites of the group within a session. This was necessary to identify users' interests: what they viewed, what they bought, what was put in the cart but not bought, which products from different sites could act as "substitute products" in the buying process, and so on.
  3. Track the advertising activity of all the marketers (in each site's team) for subsequent analysis. This was necessary for enriching the profile of each visitor, identifying the best advertising campaigns tied to a product, identifying effective advertising channels with reference to a product or campaign, and so on; the list is very long.
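To make the idea concrete, here is a minimal sketch of what such a counter endpoint might boil down to. Everything in it (Flask as the framework, the `events` collection, the field names) is an illustrative assumption rather than our production code, which the article does not describe in detail.

```python
# A minimal sketch of a cross-site tracking endpoint (all names are illustrative):
# every site embeds a JS snippet that sends events here together with a shared
# visitor id stored in a cookie on a common domain.
from datetime import datetime, timezone

from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
events = MongoClient("mongodb://localhost:27017")["tracker"]["events"]

@app.route("/hit", methods=["POST"])
def hit():
    payload = request.get_json(force=True)
    events.insert_one({
        "uid": payload["uid"],                        # shared visitor id across all group sites
        "site": payload["site"],                      # which site of the group fired the event
        "event": payload.get("event", "pageview"),    # pageview / add_to_cart / purchase ...
        "url": payload.get("url"),
        "referrer": payload.get("referrer"),          # traffic source for later attribution
        "utm": payload.get("utm", {}),                # ad-campaign labels set by marketers
        "ts": datetime.now(timezone.utc),
    })
    return jsonify({"ok": True})

if __name__ == "__main__":
    app.run(port=8080)
```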


All data was poured in real time into a local collection. It turned out not bad at all. Later it was possible to build any aggregate report by product as well as by audience, advertising campaigns, traffic sources and anything else that comes to mind. For each product there was data on price, quantity in stock, discounts, promotions and a whole sea of other data.
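With the events in a single collection, an aggregate report such as "purchases by traffic source" can be expressed as one aggregation pipeline. A sketch over the hypothetical schema from the previous snippet:

```python
# A sketch of an aggregate report over the events collection: purchases grouped
# by UTM source, counted and sorted. Field names follow the illustrative schema above.
from pymongo import MongoClient

events = MongoClient("mongodb://localhost:27017")["tracker"]["events"]

report = events.aggregate([
    {"$match": {"event": "purchase"}},
    {"$group": {
        "_id": "$utm.source",                 # traffic source (google, yandex, email, ...)
        "orders": {"$sum": 1},
        "buyers": {"$addToSet": "$uid"},
    }},
    {"$project": {"orders": 1, "unique_buyers": {"$size": "$buyers"}}},
    {"$sort": {"orders": -1}},
])

for row in report:
    print(row["_id"], row["orders"], row["unique_buyers"])
```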



Luckily for me, I was an analyst + developer + marketer + manager in one, and I had access to everything that had been digitized in the holding. I had no technical specification at the start; I did it for myself to solve everyday data analysis tasks.



Technical moment:





So far everything was like everyone else's... But then it began


Given that enough knowledge about buyers and products had accumulated, we decided to create our own recommendation system for the online store.



Everything turned out great. We analyzed everything, but without fanaticism. The model grew in size and, as usually happens, the time came when I wanted more.



Technical moment 2:





A few months later, the "quality" of the API had crossed all sane boundaries. For some users the response time exceeded the 400 ms mark. Content blocks were assembled slowly, and the site became noticeably sluggish. The MongoDB collections totaled tens of millions of records...



It's time to change something



Almost all the tools were logged at the level of individual operations; every sneeze was measured.



What we replaced with what:





Why not the Metrica API?



At first I was looking toward ready-made solutions from Yandex, but not for long. They are fine when a single site is in operation, but when there are N of them and you want to process the data right away, there is no time for dancing with a tambourine.



Why MongoDB?



Product attributes were constantly being added, and some of them, alas, were not always present.

Aggregation queries fit the team's local technological style very well, and we did not want to multiply classical SQL tables.



The types and variants of the data being stored and processed changed quite often.
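This is where a document store helps: products with different attribute sets can live side by side in one collection with no migrations. An illustrative sketch, not the real product schema:

```python
# Two products with different attribute sets stored side by side; no schema
# migration is needed when marketers add yet another characteristic.
from pymongo import MongoClient

products = MongoClient("mongodb://localhost:27017")["shop"]["products"]

products.insert_many([
    {"sku": "A-100", "price": 1290, "stock": 14, "color": "red", "volume_l": 0.5},
    {"sku": "B-200", "price": 990,  "stock": 3,  "discount": 0.15, "promo": "spring"},
])

# Queries simply ignore fields a document does not have.
print(products.count_documents({"discount": {"$gt": 0}}))
```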

At first I thought I would use Yandex ClickHouse as the basis for the service, but then I abandoned that idea; still, ClickHouse also found its place in the holding's stack.



A new era arrived: 2000 requests per second... with the caveat that new functionality would be rolled out in a week and the load would grow even more.



During machine learning, for the first time I saw in htop a 100% load on all 12 cores at once and a completely full swap on a production server. Zabbix actively reported that MongoDB had already switched replica set masters twice in 10 minutes. Everyone wanted stability and a predictable state.



It's time to change something 2.0



The number of users grew, and the number of customers grew with it. For everyone who had ever visited any of the sites, we had accumulated a personal profile. An audience of regular visitors had partially formed.



What could we do by then? Pretty much any non-standard report for analytics, plus a variety of content:





In fact, none of this was in the spirit of "abnormal programming"; we wanted something else, and something else is what came to us. During advertising campaigns the online store bent under the load, to say nothing of our little APIs, which caught that load while "standing right next to it".



Decisions made on time



We analyzed the audience and decided to assemble content not for each individual, but for groups of visitors. The whole crowd was clustered. Each cluster had its own characteristics and "taste" in shopping. Clustering is run every day, so that on his next visit the user is shown exactly the content that suits him best.
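The article does not name the algorithm, so below is only a minimal sketch of what a nightly clustering step could look like, under the assumption that user profiles are reduced to numeric feature vectors and scikit-learn's KMeans is used:

```python
# A minimal sketch of the nightly clustering step (the real feature set and
# algorithm are assumptions, not the production pipeline).
import numpy as np
from sklearn.cluster import KMeans

def nightly_clustering(profiles, n_clusters=500):
    """profiles: list of (uid, feature_vector) built from purchase/view history."""
    uids = [uid for uid, _ in profiles]
    X = np.array([vec for _, vec in profiles])

    model = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)

    # Map every known visitor to the cluster whose content he will see tomorrow.
    return dict(zip(uids, model.labels_.tolist())), model

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo_profiles = [(f"uid{i}", rng.random(16)) for i in range(2000)]
    assignment, _ = nightly_clustering(demo_profiles, n_clusters=20)
    print(assignment["uid0"])
```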



From session to session a client's interests and needs change, and if last time he was automatically assigned to cluster No. 788897, then, given his current interests, the system can move him to cluster No. 9464, which has a better effect on subsequent conversion.



After the daily clustering procedure, the next stage was launched: model training, taking into account the new data and knowledge about customers, as well as the goods that had appeared on the store shelves or left them forever.



For each cluster we formed content blocks in advance and kept them in memory. This is where Tarantool came on the scene in all its glory. Previously we had used it to store "fast" data that was then fed into machine learning; it was the best option for not hammering MongoDB, which was already busy with other tasks. Tarantool spaces stored product data and user data (the necessary knowledge about the buyer).
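A sketch of how the precomputed blocks could be written into a Tarantool space, assuming the official tarantool Python connector; the space name and tuple layout are illustrative:

```python
# Writing precomputed content blocks into Tarantool so that page assembly
# only needs a key lookup. Space name and tuple layout are assumptions.
import json
import tarantool

conn = tarantool.connect("localhost", 3301)

def store_cluster_content(cluster_id, content_blocks):
    # replace() overwrites yesterday's blocks stored under the same cluster key
    conn.replace("cluster_content", (cluster_id, json.dumps(content_blocks)))

store_cluster_content(9464, [
    {"block": "main_banner", "items": ["A-100", "B-200"]},
    {"block": "recommended", "items": ["C-300", "D-400", "E-500"]},
])
```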



Roughly speaking, tonight we "prepared" content for every audience cluster that might visit the site tomorrow. A user came in, we quickly determined whether we knew anything about him, and if the answer was yes, the right content package flew off to Nginx. Separately, for NoName users a default cluster with its own content was assembled.
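On the serving side the lookup boils down to a couple of key reads with a fallback to the default cluster for unknown visitors. Again an illustrative sketch with assumed space names:

```python
# Serving side: resolve the visitor's cluster and fetch the prebuilt content,
# falling back to the default cluster for NoName users. Space names are assumptions.
import json
import tarantool

conn = tarantool.connect("localhost", 3301)
DEFAULT_CLUSTER = 0

def content_for_visitor(uid):
    # user_profiles tuples are assumed to look like (uid, cluster_id, ...)
    profile = conn.select("user_profiles", uid)
    cluster_id = profile[0][1] if len(profile) > 0 else DEFAULT_CLUSTER

    blocks = conn.select("cluster_content", cluster_id)
    return json.loads(blocks[0][1]) if len(blocks) > 0 else []

print(content_for_visitor("uid42"))
```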



Postprocessing for personalization



We knew that there were ordinary users, and there were those on whom we had a whole dossier of knowledge. All of this lived in Tarantool and was updated in real time.



At page assembly time we knew the entire history of purchases and abandoned carts of each visitor (if he had previously been our client), determined his cluster membership, the clickstream module provided knowledge about the traffic source, and we knew his past and immediate interests. Building a TOP50 array on the fly from previously viewed products (on any of the group's sites), we took the statistical mode and mixed in, to taste, content for the products whose categories appear most often in that TOP50. This simple analysis of the last viewed products brought real profit. We enriched the cluster content with personal content.
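A sketch of that post-processing step: take the last 50 viewed products, find the most frequent categories (the mode) and float matching cluster items to the top. All field names are assumptions:

```python
# Post-processing sketch: from the visitor's last 50 viewed products we take the
# most frequent categories ("the mode") and promote cluster items of those
# categories to the top of the block. Field names are illustrative.
from collections import Counter

def personalize(cluster_items, viewed_products, top_n_categories=3):
    """cluster_items / viewed_products: dicts with at least a 'category' key."""
    top50 = viewed_products[-50:]
    counts = Counter(p["category"] for p in top50)
    favourite = {cat for cat, _ in counts.most_common(top_n_categories)}

    # Stable sort: items matching the visitor's favourite categories float up,
    # the rest of the cluster content keeps its original order.
    return sorted(cluster_items, key=lambda item: item["category"] not in favourite)

cluster_items = [{"sku": "A-100", "category": "tea"},
                 {"sku": "B-200", "category": "coffee"},
                 {"sku": "C-300", "category": "coffee"}]
viewed = [{"sku": "X", "category": "coffee"}] * 30 + [{"sku": "Y", "category": "tea"}] * 5
print(personalize(cluster_items, viewed))
```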



Results of our experience



  1. We accelerated the process of creating personal content N times
  2. We reduced server load 15-fold
  3. We can build almost any content personally, taking into account the marketers' many wishes, the specifics of product presentation and numerous exceptions, plus the data from the user profile and the events happening on the site at that very moment, and all of this in ~25 ms
  4. Conversion on such blocks does not fall below 5.6%: users willingly buy what is closest to their current needs
  5. Page loading speed is ideal: we removed more than 67% of the content that was "off target" for the cluster, which is as it should be
  6. We got a tool that, for the marketers' tasks, not only answers "what happened before" but also helps shape content for the near future in a targeted way, taking into account the interests and needs of potential buyers
  7. DMP data was added to each buyer's profile, so now we can also cluster by social network data, interests, income level and other goodies
  8. Our recommendation service has become better, and the store gets more orders


What good is this?



We gained new experience and tested a number of hypotheses we previously did not know how to approach. We no longer use a third-party paid recommendation service, which did not take all our specifics into account and worked only within a single domain. The team got a good and interesting case, which it handled successfully.



Appetites keep growing... We are now testing new content assembly logic. We want to build pages for advertising campaigns, newsletters and other external activity. The plans include moving the config assembly logic from manual mode to machine learning. The sites will live their own lives, and we, off to the side with our "popcorn", will watch the presentation of site content evolve based on the opinion of the AI.


