About the service

Below are answers to the most frequently asked questions about the service and how to use it for your business. Our service helps companies find new business partners: to that end, we analyze the contact information only of legal entities (companies) that promote their services on their websites.

Frequently asked questions

What's it for?

This service lets you quickly search 250 million websites and automatically collect contact and other information from them. Our goal is to provide a convenient mechanism for finding customers for your business anywhere in the world.

For example, you can find all medical clinics, export their contact information in Excel format, and continue working with them in your CRM system. You can also search for websites built on particular technologies, for instance, a selection of all dental clinic sites that run on WordPress.

How do I use it?

Working with the service is very easy. Type a query into the search bar, for example "online store", specify where to search (site titles, descriptions, etc.), and the system will quickly find sites matching your request.

As a rule, companies state their line of business in their site titles, so the results are highly relevant. You can also build complex, compound queries to refine the results.

What data do you collect?

We automatically (and continuously) search all relevant pages of websites for contact information: e-mail addresses, phone numbers, TIN, PSRN, and so on. We verify everything we collect (for example, by checking corporate email addresses and TIN checksums) and further enrich it from other sources.

For example, knowing a company's TIN, we can gather extended information about the site's owner: CEO, founders, revenue, and so on. Please note that under the current legislation of the Russian Federation, a phone number is not personal data unless accompanied by other data identifying its owner. Our service collects contact information only of legal entities (organizations or sole proprietors) that is freely available on the Internet.
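The TIN checksum verification mentioned above can be sketched in Python. This is an illustrative implementation of the publicly documented Russian TIN (INN) check-digit algorithm, not the service's actual code:

```python
def valid_tin(tin: str) -> bool:
    """Validate a Russian TIN (INN) using its public check-digit algorithm.

    10-digit TINs (legal entities) carry one check digit;
    12-digit TINs (sole proprietors) carry two.
    """
    if not tin.isdigit():
        return False
    digits = [int(c) for c in tin]

    def check_digit(weights):
        # Weighted sum of the leading digits, then mod 11, then mod 10.
        return sum(w * d for w, d in zip(weights, digits)) % 11 % 10

    if len(digits) == 10:
        return digits[9] == check_digit([2, 4, 10, 3, 5, 9, 4, 6, 8])
    if len(digits) == 12:
        return (digits[10] == check_digit([7, 2, 4, 10, 3, 5, 9, 4, 6, 8])
                and digits[11] == check_digit([3, 7, 2, 4, 10, 3, 5, 9, 4, 6, 8]))
    return False
```

A filter like this cheaply rejects mistyped or fabricated TINs before any slower enrichment step.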

Likewise, an e-mail address constitutes personal data only in combination with other personal data that makes it possible to identify the person it belongs to. We look for e-mail addresses only in the contact sections of websites, where they do not constitute personal data.

Is the data up to date?

Yes, our bots work around the clock, constantly updating the information. Of course, no one is immune to errors, but we continuously improve our algorithms to remove as much "garbage" from the search results as possible.

Is it legal?

Yes. We do not hack websites, guess passwords, or the like. Our robots operate exactly like the crawlers of Yandex or Google, collecting and analyzing publicly available information published on company websites. If you believe that some collected information should be deleted, please use the feedback form on the Contact Us page.

Data sources

Our service uses only open data from various sources (websites) on the Internet.

What is open data? It is information that site owners (and owners of other resources) have themselves published openly for everyone and have allowed search robots (for example, Google's) to collect, store, and process.

Our service automatically collects information from various public sources and structures it for easy export in Excel format. Technically, our service is no different from popular search engines such as Google or Yandex, except that it lets you export search results in Excel format.
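Structured results can be exported to an Excel-friendly file with nothing more than Python's standard library. A minimal sketch follows; the field names and sample rows are illustrative, not the service's actual export schema:

```python
import csv

# Hypothetical search results; the real export columns may differ.
results = [
    {"domain": "example-clinic.ru", "email": "info@example-clinic.ru",
     "phone": "+7 495 000-00-00", "tin": "7707083893"},
    {"domain": "example-dental.ru", "email": "hello@example-dental.ru",
     "phone": "+7 812 000-00-00", "tin": "500100732259"},
]

# utf-8-sig prepends a BOM so Excel opens Cyrillic text correctly.
with open("results.csv", "w", newline="", encoding="utf-8-sig") as f:
    writer = csv.DictWriter(f, fieldnames=["domain", "email", "phone", "tin"])
    writer.writeheader()
    writer.writerows(results)
```

The resulting results.csv opens directly in Microsoft Excel, one company per row.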

Personal data

We do not search for or process people's personal data. That said, our robots may still collect such data if site owners have deliberately listed personal details as contact information on the relevant pages of their sites. We do not hack websites, nor do we guess passwords.

If you have published your personal data and opened your site to indexing, search engines such as Google and Yandex will add that data to their indexes as well. If you have found your personal data here and object to its presence, please request that your site be removed from our search index.

We don't hack websites

We are not hackers. We do not hack sites, guess passwords, search for vulnerabilities, launch DDoS attacks, or interfere with the operation of websites. All we do is structure publicly available information, with the option to export it in Microsoft Excel format for convenient work.

Our robots collect publicly available information from a site only if that site is open to indexing by search engines.

How did you get the data?

Our service searches for open data across all domains in the world and publishes aggregated, publicly available information. If your site allows indexing by search robots and you have published publicly available contact information, our robots will automatically collect, structure, and neatly display this data on our site for the convenience of potential customers.

How will the service help me?

We summarize all the useful information found about your site (company) on one page and make it easy for potential customers to get in touch with you. Your site can also appear in the "similar" list when a user views information about another site. This helps you find new customers for your products or services.

How do I change the site data?

Our parsers regularly and automatically collect all public data from your website, so once you change something on your site, the data on ours will update after a while.

How do I delete the information?

This is very easy to do. Add the following two lines to your robots.txt file:

User-agent: tapki.com
Disallow: /

The robots.txt file of your site (or of any site) can be viewed in a browser at www.site.ru/robots.txt. It is not a mandatory element of a site, but its presence is desirable, because it lets site owners manage search robots.

The robots.txt file tells search robots (Yandex, Google, etc.) which pages of your site may be processed. Use it to limit the number of requests your site receives and thus reduce the load on it.

After you add the lines above to robots.txt, our bots will detect them automatically on their next visit, and your site's information will become unavailable. This usually happens within ten days.
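You can check that the directive behaves as intended with Python's standard robots.txt parser. The bot name tapki.com and the example URL are taken from this page; this is a local sketch of the rule, not a live check of any crawler:

```python
from urllib.robotparser import RobotFileParser

# The two lines recommended above, as they would appear in robots.txt.
rules = [
    "User-agent: tapki.com",
    "Disallow: /",
]

parser = RobotFileParser()
parser.parse(rules)

# The service's bot is blocked from every page of the site...
print(parser.can_fetch("tapki.com", "https://www.site.ru/contacts"))  # False
# ...while other crawlers remain unaffected.
print(parser.can_fetch("Googlebot", "https://www.site.ru/contacts"))  # True
```

Because the rule names only one user agent, your visibility in Yandex, Google, and other search engines is unchanged.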

What is the basis for publication?

By creating a website and opening it for indexing by search engines (Yandex, Google, Bing, etc.), you make your resource publicly available. We do not break the law: we do not hack sites, guess passwords, or interfere with a site's operation. Our robots very delicately collect open information, structure it, and display it in a user-friendly form. In other words, once a site is created, it is indexable by default unless you explicitly specify otherwise.

What is HTML search for?

HTML code search lets you find sites that run particular services. For example, you can find all sites where a specific online-chat widget is installed, or build a selection of sites created with a particular CMS. You can even combine queries, say, to find all dental clinics that use the analytics system you need.
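The idea behind such technology detection can be sketched as a search of raw HTML for well-known fingerprints. The marker strings below are common public indicators of each platform, chosen for illustration; the service's actual detection rules are not described on this page:

```python
# Common public fingerprints of popular platforms; real detection
# rules are likely far more elaborate (scripts, headers, cookies).
FINGERPRINTS = {
    "WordPress": ["/wp-content/", "/wp-includes/"],
    "Bitrix": ["/bitrix/"],
    "Joomla": ["/media/jui/", 'content="Joomla'],
}

def detect_technologies(html: str) -> list:
    """Return the names of platforms whose markers appear in the HTML."""
    return [name for name, markers in FINGERPRINTS.items()
            if any(marker in html for marker in markers)]

sample = '<link rel="stylesheet" href="/wp-content/themes/clinic/style.css">'
print(detect_technologies(sample))  # ['WordPress']
```

Combining such a check with a keyword query ("dental clinic" in the title plus a WordPress fingerprint in the HTML) is exactly the kind of compound search described above.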

Does the service interfere with websites?

No. Our search robots create no "parasitic" load: we analyze only 3-4 pages of your site and do so no more than once a month. You simply will not notice it!

How many sites are in the database?

At the moment we analyze about 250 million domain names across all popular domain zones. This number grows by approximately 100,000 domains every day.

Inappropriate content

We do not index banned sites, but if you find a site with inappropriate content, please contact us via the feedback form and we will remove it from our database immediately.