Usually, to get all site pages, simply enter any of its pages in the "Site" field and click the "Get site pages" button.
If you cannot get the site pages for some reason, read the next section.
How the service works
In most cases, a site has a file that lists all of its internal links, called a Sitemap. As a rule, it is located at [site]/sitemap.xml (e.g. https://vivazzi.ru/sitemap.xml). Using this file, the service extracts all internal page links.
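For illustration, here is a minimal Python sketch of that step, assuming the sitemap follows the standard sitemaps.org schema (the service's actual implementation may differ):

import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(sitemap_url):
    """Download a sitemap and return the links listed in it."""
    with urllib.request.urlopen(sitemap_url) as response:
        root = ET.fromstring(response.read())
    # Each <loc> element holds a page link (or, in a sitemap index,
    # the link of a nested sitemap).
    return [loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")]

print(sitemap_urls("https://vivazzi.ru/sitemap.xml"))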
Normally, the path to this file is specified in [site]/robots.txt in the Sitemap directive, for example:

User-agent: *
Host: https://vivazzi.ru
Sitemap: https://vivazzi.ru/sitemap.xml
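Extracting that directive can be sketched in a few lines of Python; this assumes the conventional "Sitemap: <url>" lines shown above:

import urllib.request

def sitemaps_from_robots(site):
    """Return all sitemap URLs declared in [site]/robots.txt."""
    with urllib.request.urlopen(site.rstrip("/") + "/robots.txt") as response:
        text = response.read().decode("utf-8", errors="replace")
    # Keep the URL part of every "Sitemap: ..." line.
    return [line.split(":", 1)[1].strip()
            for line in text.splitlines()
            if line.lower().startswith("sitemap:")]

print(sitemaps_from_robots("https://vivazzi.ru"))
# -> ['https://vivazzi.ru/sitemap.xml']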
In rare cases, site developers use another location for the Sitemap. The service will then try to find the file at the location specified in robots.txt. If robots.txt is not available, or the sitemap file specified in robots.txt does not exist, the service cannot display the site pages, because it does not automatically crawl pages by following the site's links the way search engines (Google, Yandex and so on) or spider programs do.
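That lookup order could look roughly like the following sketch (an illustration only, not the service's real code; it reuses the sitemaps_from_robots helper from the example above):

import urllib.error
import urllib.request

def find_sitemap(site):
    """Return a reachable sitemap URL for the site, or None."""
    # Try the default location first, then locations declared in robots.txt.
    candidates = [site.rstrip("/") + "/sitemap.xml"]
    try:
        candidates += sitemaps_from_robots(site)  # helper from the sketch above
    except urllib.error.URLError:
        pass  # robots.txt is not available
    for url in candidates:
        try:
            with urllib.request.urlopen(url):
                return url
        except urllib.error.URLError:
            continue  # no sitemap file at this location
    return None  # nothing found: pages cannot be listed without crawling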
If you did not get the site pages, try using different spider programs, although they can be hard for an ordinary user to master.
There is also a way to get all the links of a site through the Google or Yandex search engine by typing a query with the site: operator in the search bar:

site:vivazzi.ru
But this method has a disadvantage: it displays only the pages that are included in the search index, and the remaining pages will be ignored if they were not indexed for some reason.
You can also find all the links on a single page using various services, for example link_extractor, which shows the internal and external links on a page. Such a service is of little use if you want to get all the links of a site, because link_extractor does not crawl through all the links on the site.
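For comparison, the core of such a single-page extractor boils down to something like this standard-library sketch (real services also handle encodings, redirects and so on):

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collect the href targets of all <a> tags on a page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page URL.
                    self.links.append(urljoin(self.base_url, value))

def page_links(url):
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    collector = LinkCollector(url)
    collector.feed(html)
    return collector.links

print(page_links("https://vivazzi.ru/"))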