## 🕷 Telegram Web Crawler
This project automatically detects changes made to the official Telegram
websites. This is useful for anticipating future updates and other news
(new vacancies, API updates, etc.).

| Name | Commits | Status |
| ---- | ------- | ------ |
| Site updates tracker | [Commits](https://github.com/MarshalX/telegram-crawler/commits/data) | ![Fetch new content of tracked links to files](https://github.com/MarshalX/telegram-crawler/actions/workflows/make_files_tree.yml/badge.svg?branch=main) |
| Site links tracker | [Commits](https://github.com/MarshalX/telegram-crawler/commits/main/tracked_links.txt) | ![Generate or update list of tracked links](https://github.com/MarshalX/telegram-crawler/actions/workflows/make_tracked_links_list.yml/badge.svg?branch=main) |

* ✅ passing – new changes were found
* ❌ failing – no changes were found

Subscribe to the **[channel with alerts](https://t.me/tgcrawl)** to stay updated.

A copy of the Telegram websites is stored **[here](https://github.com/MarshalX/telegram-crawler/tree/data/data)**.

![GitHub pretty diff](https://i.imgur.com/BK8UAju.png)

### How it works
1. [Link crawling](make_tracked_links_list.py) runs **as often as possible**.

   It starts crawling from the home page of each site, detects relative and
   absolute sub-links, and recursively repeats the operation. It writes a
   list of unique links for later content comparison. Links can also be
   added by hand to help the script find hidden pages (pages that nothing
   else links to). To manage exceptions, there is a
   [system of rules](#example-of-link-crawler-rules-configuration) for the
   link crawler. A minimal sketch of this step follows the list.

2. [Content crawling](make_files_tree.py) also runs **as often as
   possible** and uses the list of links collected in step 1.

   Going through the list, it fetches each page's content and builds a tree
   of subfolders and files, then removes all dynamic content from the files
   so that diffs show only meaningful changes. A sketch of this step is also
   shown after the list.

3. Everything runs on [GitHub Actions](.github/workflows/), with no servers
   of your own: you can simply fork this repository and run your own
   tracker. The workflows launch the scripts and commit the changes. All
   file changes are tracked by Git and displayed nicely on GitHub. A
   workflow run succeeds only if there are changes on the Telegram
   websites; otherwise it fails. After a successful run, notifications can
   be sent to the Telegram channel, and so on.
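
For illustration, here is a minimal, synchronous sketch of step 1. The entry
point, the output file name, and the naive regex-based link extraction are
all assumptions made for brevity; the real
[make_tracked_links_list.py](make_tracked_links_list.py) is more involved.

```python
import re
import urllib.parse
import urllib.request

START_URL = 'https://telegram.org/'  # assumed entry point, for illustration
OUTPUT_FILE = 'tracked_links.txt'

# naive href extractor; a real crawler parses HTML more carefully
HREF_RE = re.compile(r'href="([^"#]+)"')


def crawl(url: str, seen: set) -> None:
    """Fetch a page, collect its links, and recurse into unseen ones."""
    if url in seen:
        return
    seen.add(url)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            if 'text/html' not in resp.headers.get('Content-Type', ''):
                return
            html = resp.read().decode('utf-8', errors='ignore')
    except OSError:
        return
    for href in HREF_RE.findall(html):
        absolute = urllib.parse.urljoin(url, href)  # resolve relative links
        if absolute.startswith(START_URL):  # stay within the tracked site
            crawl(absolute, seen)


if __name__ == '__main__':
    links = set()
    crawl(START_URL, links)  # unbounded recursion is fine only for a sketch
    with open(OUTPUT_FILE, 'w') as f:
        f.write('\n'.join(sorted(links)))  # unique links, stable order
```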
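
And a similar sketch of step 2: mirroring each tracked link into a file tree
and stripping volatile fragments so that diffs show only real changes. The
`DYNAMIC_PATTERNS` regexes, the `data/` layout, and the assumption that
links are stored as host/path without a scheme are all illustrative; see
[make_files_tree.py](make_files_tree.py) for the actual logic.

```python
import pathlib
import re
import urllib.parse
import urllib.request

DATA_DIR = pathlib.Path('data')  # assumed output directory

# illustrative examples of volatile content that would pollute every diff
DYNAMIC_PATTERNS = [
    re.compile(r'\?hash=[a-f0-9]+'),  # cache-busting hashes
    re.compile(r'<meta name="csrf-token"[^>]*>'),  # per-request tokens
]


def save_page(link: str) -> None:
    """Download one page, strip dynamic content, and write it to the tree."""
    url = f'https://{link}'  # assumes links are stored without a scheme
    with urllib.request.urlopen(url, timeout=10) as resp:
        content = resp.read().decode('utf-8', errors='ignore')
    for pattern in DYNAMIC_PATTERNS:
        content = pattern.sub('', content)  # drop volatile fragments
    parsed = urllib.parse.urlparse(url)
    # e.g. telegram.org/tour -> data/telegram.org/tour.html
    path = DATA_DIR / parsed.netloc / (parsed.path.strip('/') or 'index')
    path = path.with_suffix('.html')
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(content)


if __name__ == '__main__':
    with open('tracked_links.txt') as f:
        for line in f:
            link = line.strip()
            if link:
                save_page(link)
```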
### FAQ
**Q:** How often is "**as often as possible**"?

**A:** TL;DR: the content update action runs every ~10 minutes. More info:

- [Scheduled actions cannot be run more than once every 5 minutes.](https://github.blog/changelog/2019-11-01-github-actions-scheduled-jobs-maximum-frequency-is-changing/)
- [GitHub Actions workflow not triggering at scheduled time](https://upptime.js.org/blog/2021/01/22/github-actions-schedule-not-working/).

**Q:** Why are there two separate crawl scripts instead of one?

**A:** Because the original idea was to update the tracked links only once
an hour, and separate scripts and workflows were convenient for that. After
the Telegram 7.7 update, I realised that discovering new blog posts that
slowly was a bad idea.

**Q:** Why does the script for sending alerts have a `while` loop?

**A:** The GitHub API doesn't return information about a commit immediately
after it is pushed to the repository, so the script waits for the
information to appear.
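
For illustration, such a polling loop could look roughly like this (a sketch
with placeholder values and assumed error codes, not the actual code of the
alert script):

```python
import json
import time
import urllib.error
import urllib.request

REPO = 'MarshalX/telegram-crawler'  # placeholder values for illustration
SHA = '0123abc'


def wait_for_commit(repo: str, sha: str) -> dict:
    """Poll the GitHub API until a freshly pushed commit becomes visible."""
    url = f'https://api.github.com/repos/{repo}/commits/{sha}'
    while True:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as e:
            if e.code not in (404, 422):
                raise  # a real error, not just "commit not visible yet"
            time.sleep(5)  # wait for the API to catch up, then retry
```
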
**Q:** Why are you using a GitHub Personal Access Token in the
actions/checkout workflow step?

**A:** To be able to trigger other workflows from the `on: push` trigger.
More info:

- [Action does not trigger another on push tag action](https://github.community/t/action-does-not-trigger-another-on-push-tag-action/17148)

**Q:** Why are you using a GitHub PAT in [make_and_send_alert.py](make_and_send_alert.py)?

**A:** To increase the GitHub API rate limits.
### TODO list
- add storing the history of content using hashes;
- add storing hashes of images, SVGs, and videos.

### Example of link crawler rules configuration
```python
CRAWL_RULES = {
    # every rule is a regex
    # an empty string matches any URL
    # allow rules take priority over deny rules
    'translations.telegram.org': {
        'allow': {
            r'^[^/]*$',  # root
            r'org/[^/]*/$',  # 1st-level sub-path
            r'/en/[a-z_]+/$',  # 1st level after /en/
        },
        'deny': {
            '',  # everything else
        },
    },
    'bugs.telegram.org': {
        'deny': {
            '',  # deny the whole subdomain
        },
    },
}
```
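
For illustration, a hypothetical helper (not part of the project) that
applies such a configuration, demonstrating the allow-over-deny semantics
described in the comments above:

```python
import re


def is_allowed(url: str, rules: dict) -> bool:
    """Return True if the URL passes the crawl rules for its host."""
    for host, host_rules in rules.items():
        if host not in url:
            continue
        # allow rules are checked first, so they win over deny rules
        if any(re.search(p, url) for p in host_rules.get('allow', ())):
            return True
        if any(re.search(p, url) for p in host_rules.get('deny', ())):
            return False
    return True  # hosts without rules are crawled by default


# is_allowed('translations.telegram.org/en/android/', CRAWL_RULES) -> True
# is_allowed('bugs.telegram.org/c/12345', CRAWL_RULES)             -> False
```
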
### Current hidden URLs list
```python
HIDDEN_URLS = {
    # 'corefork.telegram.org',  # disabled
    'telegram.org/privacy/gmailbot',
    'telegram.org/tos',
    'telegram.org/tour',
    'telegram.org/evolution',
    'desktop.telegram.org/changelog',
}
```
### License
Licensed under the [MIT License](LICENSE).