🕷 Automatically detect changes made to the official Telegram sites, clients and servers. https://t.me/tgcrawl

🕷 Telegram Web Crawler

This project automatically detects changes made to the official Telegram sites. This is useful for anticipating future updates and other news (new vacancies, API updates, etc.).

Name                   Status
Site updates tracker   Fetch new content of tracked links to files
Site links tracker     Generate or update list of tracked links

Workflow badge legend:
  • passing: new changes found
  • failing: no changes

You should subscribe to the channel with alerts to stay updated, or watch this repository (enable notifications with the "All Activity" setting). A copy of the Telegram websites is stored here.

Screenshot: GitHub pretty diff

How it works

  1. Link crawling runs once an hour. It starts from the site's home page, detects relative and absolute sub-links, and recursively repeats the operation. It writes a list of unique links for later content comparison. Additionally, links can be added by hand to help the script find hidden pages (links that nothing else refers to). A system of rules for the link crawler manages exceptions.

  2. Content crawling is launched as often as possible and uses the list of links collected in step 1. Going through that list, it fetches the content of each link and builds a tree of subfolders and files, removing all dynamic content from the files.

  3. Use of GitHub Actions. The system works without its own servers: you can simply fork this repository and run your own tracker. Workflows launch the scripts and commit the changes. All file changes are tracked by Git and displayed nicely on GitHub. A GitHub Actions build succeeds only if there are changes on the Telegram websites; otherwise, the workflow fails. If the build was successful, we can send notifications to a Telegram channel, and so on.
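A minimal sketch of the link collection in step 1, assuming the real scripts differ in detail (the names `STARTING_URL`, `LINK_RE`, and `crawl` are illustrative, not the project's actual identifiers):

```python
import re
from urllib.parse import urljoin
from urllib.request import urlopen

STARTING_URL = 'https://telegram.org/'  # illustrative starting point
# naive href extractor; skips links containing '#' fragments
LINK_RE = re.compile(r'href="([^"#]+)"')

def crawl(url: str, visited: set) -> None:
    """Recursively collect unique same-site links starting from `url`."""
    if url in visited:
        return
    visited.add(url)
    try:
        html = urlopen(url, timeout=10).read().decode('utf-8', 'ignore')
    except Exception:
        return  # unreachable pages are simply skipped
    for href in LINK_RE.findall(html):
        absolute = urljoin(url, href)  # handles relative and absolute links
        if absolute.startswith(STARTING_URL):
            crawl(absolute, visited)

# Example: extract links from a static HTML snippet (no network needed)
sample = '<a href="/apps">Apps</a> <a href="https://telegram.org/tour">Tour</a>'
found = [urljoin(STARTING_URL, h) for h in LINK_RE.findall(sample)]
```

The collected set would then be written out, one URL per line, for the content crawler to consume.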

FAQ

Q: How often is "as often as possible"?

A: TL;DR: the content update action runs every ~10 minutes. More info:
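For reference, a GitHub Actions schedule at roughly that interval looks like the following (a generic sketch, not necessarily this repository's exact workflow; note that GitHub runs scheduled workflows on a best-effort basis, so actual intervals can be longer):

```yaml
on:
  schedule:
    # best-effort: GitHub may delay or skip scheduled runs under load
    - cron: '*/10 * * * *'
```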

TODO list

  • add storing of content history using hashes;
  • add storing hashes of images, SVG and video files.
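One way the first TODO item could be approached: store a hash per tracked file and compare hashes instead of full contents (a sketch under that assumption; `content_hash` is an illustrative name, not an existing function in this project):

```python
import hashlib

def content_hash(content: bytes) -> str:
    """Return a stable SHA-256 hex digest used to detect content changes."""
    return hashlib.sha256(content).hexdigest()

# Comparing digests keeps the stored history compact:
old = content_hash(b'<html>old page</html>')
new = content_hash(b'<html>new page</html>')
changed = old != new
```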
Current rules list

CRAWL_RULES = {
    # every rule is regex
    # empty string means match any url
    # allow rules with higher priority than deny
    'translations.telegram.org': {
        'allow': {
            r'^[^/]*$',  # root
            r'org/[^/]*/$',  # 1 lvl sub
            r'/en/[a-z_]+/$'  # 1 lvl after /en/
        },
        'deny': {
            '',  # all
        }
    },
    'bugs.telegram.org': {
        'deny': {
            '',    # deny all sub domain
        },
    },
}
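A sketch of how such rules could be applied, with allow rules taking priority over deny rules as the comments above describe (the helper `is_tracked` and the sample `EXAMPLE_RULES` are illustrative, not the project's actual code):

```python
import re

def is_tracked(domain: str, path: str, rules: dict) -> bool:
    """Return True if `path` on `domain` should be crawled under `rules`."""
    domain_rules = rules.get(domain)
    if domain_rules is None:
        return True  # no rules for this domain: crawl everything
    # allow rules have higher priority than deny rules
    for pattern in domain_rules.get('allow', ()):
        if re.search(pattern, path):
            return True
    for pattern in domain_rules.get('deny', ()):
        if re.search(pattern, path):  # the empty pattern '' matches any path
            return False
    return True

EXAMPLE_RULES = {
    'bugs.telegram.org': {'deny': {''}},  # deny the whole subdomain
}
```

With these example rules, every path on `bugs.telegram.org` is rejected, while domains without rules are crawled unconditionally.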

Current hidden URLs list

HIDDEN_URLS = {
    # 'corefork.telegram.org', # disabled

    'telegram.org/privacy/gmailbot',
    'telegram.org/tos',
    'telegram.org/tour',
    'telegram.org/evolution',

    'desktop.telegram.org/changelog',
}

License

Licensed under the MIT License.