How it was done
After capturing the first 100 results for each query on Google.com and selecting 25 of them (removing invalid URLs), we extracted the text contained in the final results with dev.zup.densitydesign.org. This produced a corpus of 200 text documents for each query, which we analyzed with dev.sven.densitydesign.org to measure the relevance of the words it contained. We assigned a category (perceptions, items, verbs, actors involved, technology, and environments) to the first 150 words that emerged from the analysis, and compiled a spreadsheet with term, TF-IDF value, and category. With an alluvial diagram made in raw.densitydesign.org we then visualized the number of words of each category in each corpus.
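The relevance measure used by the analysis tool is TF-IDF, which weights a term by how frequent it is in one document and how rare it is across the corpus. As a rough illustration of that step (not the tool's actual implementation, and using an invented toy corpus in place of the 200 extracted documents), it can be sketched as:

```python
import math
from collections import Counter

def tf_idf(corpus):
    """Compute a TF-IDF score for each term of each document.

    corpus: list of documents, each a list of lowercase tokens.
    Returns one {term: score} dict per document.
    """
    n_docs = len(corpus)
    # Document frequency: in how many documents each term appears.
    df = Counter()
    for doc in corpus:
        df.update(set(doc))
    scores = []
    for doc in corpus:
        tf = Counter(doc)
        total = len(doc)
        scores.append({
            # Term frequency times inverse document frequency.
            term: (count / total) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return scores

# Hypothetical toy corpus standing in for the extracted documents.
corpus = [
    "robot factory worker".split(),
    "robot home assistant".split(),
    "factory assembly line".split(),
]
scores = tf_idf(corpus)
# "robot" occurs in 2 of 3 documents, so its IDF is log(3/2);
# "worker" occurs in only 1, so it scores higher in document 0.
```

Terms shared by many documents of a corpus are pushed down, so the top of each ranked list holds the words most characteristic of that query's results.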
For the first 25 terms of each list we used raw.densitydesign.org to show, with a circle packing diagram, the proportion between them and the category they belong to. We then replaced the circles with words of the same size.
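Preparing the data for that diagram amounts to taking the top terms of a list by TF-IDF and grouping them by category into the term/category hierarchy a circle packing expects. A minimal sketch, assuming hypothetical spreadsheet rows of (term, TF-IDF, category) and a cutoff of 3 instead of 25 for brevity:

```python
# Hypothetical rows from the spreadsheet: (term, tfidf, category).
rows = [
    ("sensor", 0.42, "technology"),
    ("fear", 0.38, "perceptions"),
    ("arm", 0.35, "items"),
    ("grasp", 0.31, "verbs"),
    ("worker", 0.29, "actors involved"),
    ("factory", 0.27, "environments"),
    ("camera", 0.22, "technology"),
]

# Keep the top N terms by TF-IDF value (N = 25 in the study).
top = sorted(rows, key=lambda r: r[1], reverse=True)[:3]

# Group by category: the two-level hierarchy behind the circle packing,
# where each term's TF-IDF value sets the size of its circle (or word).
packing = {}
for term, score, category in top:
    packing.setdefault(category, []).append((term, score))
```

Each category becomes an enclosing circle and each term a sized element inside it, which is what lets the circles be swapped for words scaled to the same size.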