* Fix PeerTube videos appearing with an erroneous “Edited at” marker
PeerTube videos have an `updated` field equal to `published`.
When processing an incoming activity that has the same value for `updated` and
`published`, assume this doesn't represent an actual edit.
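As a hedged sketch of that check (the method and variable names here are illustrative, not the actual activity-handling code):

```ruby
# Treat `updated` as a real edit only when it is present and differs from
# `published`; PeerTube sets both to the same timestamp on plain creates.
def significant_update?(json)
  updated_at   = json['updated']
  published_at = json['published']

  updated_at.present? && updated_at != published_at
end
```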
* Please CodeClimate
* Fix error responses in `from` search prefix (addresses mastodon/mastodon#17941)
Using unsupported prefixes now reports a 422; searching for posts from an
account the instance is not aware of reports a 404. TODO: The UI for this
on the front end is abysmal.
Searching `from:username@domain` now succeeds when `domain` is the local
domain; searching `from:@username(@domain)?` now works as expected.
* Remove unused methods on new Error classes
Error messages were being supplied when these errors were `raise`d, but they
were never actually used. The associated `raise`s have been updated
accordingly.
* Remove needless comments
* Satisfy rubocop
* Try fixing tests being unable to find AccountFindingConcern methods
* Satisfy rubocop
* Simplify `from` prefix logic
This incorporates @ClearlyClaire's suggestion (see
https://github.com/mastodon/mastodon/pull/17963#pullrequestreview-933986737).
Acceptable account strings in `from:` clauses are more lenient than
before this commit; for example, `from:@user@example.org@asnteo +cat`
will not error, and will return posts by @user@example.org containing the
word "cat". This is more consistent with how Mastodon matches mentions
in statuses. In addition, `from` clauses are not checked for
syntactically invalid usernames or domain names; the search simply 404s when
`Account.find_remote!` raises ActiveRecord::RecordNotFound.
New code for this PR that is no longer used has been removed.
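A hedged sketch of the simplified `from:` resolution described above; the helper name and the way the local domain is detected are assumptions, only `Account.find_remote!` comes from the codebase:

```ruby
# Hypothetical helper: resolve a `from:` clause to an account. When the
# account is unknown, Account.find_remote! raises ActiveRecord::RecordNotFound,
# which the search endpoint turns into a 404.
def account_for_from_clause(term)
  username, domain = term.delete_prefix('@').split('@', 2)
  domain = nil if domain == Rails.configuration.x.local_domain
  Account.find_remote!(username, domain)
end
```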
* Change e-mail notifications to only be sent when recipient is offline
Change the default for follow and mention notifications back to on
* Add preference to always send e-mail notifications
* Change wording
* Fix edits with no actual changes being allowed locally
* Fix edits with no actual changes being allowed through ActivityPub
* Fix false positive changes caused by description processing in model
* Fix not recording poll expiration update
* Fix test
* Revert changes to ProcessStatusUpdateService
* Various fixes and improvements
* Fix code style issues
* Various changes and improvements
* Add guard clause
* Change how changes to media attachments are stored for edits
Fix not being able to re-order media attachments
* Fix not broadcasting updates when polls/media is changed through ActivityPub
* Various fixes and improvements
* Update app/models/report.rb
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
* Add tracking of media attachment description changes
* Change poll in status edit to have a structure closer to the real one
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
* Change design of federation pages in admin UI
* Fix query performance in instance media attachments measure
* Fix reblogs being included in instance languages dimension
* Add `/api/v1/accounts/familiar_followers` to REST API
* Change hide network preference to be stored consistently for local and remote accounts
* Add dummy classes to migration
* Apply suggestions from code review
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
* Add trending statuses
* Fix dangling items with stale scores in localized sets
* Various fixes and improvements
- Change approve_all/reject_all to approve_accounts/reject_accounts
- Change Trends::Query methods to not mutate the original query
- Change Trends::Query#skip to offset
- Change follow recommendations to be refreshed in a transaction
* Add tests for trending statuses filtering behaviour
* Fix not applying filtering scope in controller
Video files with variable framerates are converted to constant framerate videos
and the output framerate picked by ffmpeg is based on the original file's
container framerate (which can be different from the average framerate).
This means that an input video with variable framerate with about 30 frames per
second on average, but a maximum of 120 fps will be converted to a constant 120
fps file, which won't be processed by other Mastodon servers.
This commit changes it so that input files with VFR and a maximum framerate
above the framerate threshold are converted to VFR files with the maximum frame
rate enforced.
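A hedged sketch of the decision logic only; the constant, method name and 60 fps threshold are assumptions, and the actual ffmpeg invocation is not shown:

```ruby
MAX_FRAME_RATE = 60 # assumed threshold, not necessarily the value used in the codebase

# Pick the frame rate to enforce for a variable-frame-rate input based on its
# *maximum* frame rate rather than the container frame rate; nil means "leave as is".
def enforced_frame_rate(vfr:, maximum_frame_rate:)
  return nil unless vfr && maximum_frame_rate > MAX_FRAME_RATE

  MAX_FRAME_RATE # cap the output while keeping variable frame rate
end
```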
* Fix structured data parsing from links choking on bad data
- Fix og:url meta tag being prioritized over canonical link tag
- Fix structured data parsing choking on commented-out CDATA declarations
- Fix HTML entities in title, description, provider_name, author_name
- Change structured data parsing to attempt every JSON-LD script tag
* Remove unnecessary slash escapes from CDATA regex pattern
* Fix Sidekiq warnings about JSON serialization
This occurs on every symbol argument we pass, and every symbol key in hashes,
because Sidekiq expects strings instead.
See https://github.com/mperham/sidekiq/pull/5071
We do not need to change how workers parse their arguments because this has
not changed and we were already converting to symbols adequately or using
`with_indifferent_access`.
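For illustration, a hedged before/after with a made-up worker; the point is only that arguments crossing the Sidekiq boundary should be JSON-native types:

```ruby
class ExampleWorker # hypothetical worker, not part of the codebase
  include Sidekiq::Worker

  def perform(account_id, options = {})
    options = options.with_indifferent_access
    # ...
  end
end

# Before: the symbol key gets converted during JSON serialization and triggers a warning
ExampleWorker.perform_async(account.id, skip_notify: true)

# After: string keys and plain values round-trip through JSON unchanged
ExampleWorker.perform_async(account.id, { 'skip_notify' => true })
```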
* Set Sidekiq to raise on unsafe arguments in test mode
In order to more easily catch issues that would produce warnings in production
code.
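Assuming Sidekiq ≥ 6.4, a sketch of what a test-only initializer could look like (file location and guard are illustrative):

```ruby
# config/initializers/sidekiq.rb (sketch)
# Raise ArgumentError instead of logging a warning when a job is enqueued
# with arguments that are not JSON-native.
Sidekiq.strict_args! if Rails.env.test?
```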
* Add support for editing for published statuses
* Fix references to stripped-out code
* Various fixes and improvements
* Further fixes and improvements
* Fix updates being potentially sent to unauthorized recipients
* Various fixes and improvements
* Fix wrong words in test
* Fix notifying accounts that were tagged but were not in the audience
* Fix mistake
* Add trending links
* Add overriding specific links trendability
* Add link type to preview cards and only trend articles
Change trends review notifications from being sent every 5 minutes to being sent every 2 hours
Change threshold from 5 unique accounts to 15 unique accounts
* Fix tests
For some reason, some misconfigured servers return an empty document when
queried over webfinger. Since an empty document does not lead to a parse
error, the error is not caught properly and triggers uncaught exceptions
later on.
This PR fixes that by immediately erroring out with `Webfinger::Error` on
getting an empty response.
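A hedged sketch of the guard; the method shape and error messages are illustrative:

```ruby
# Parse a webfinger response body, erroring out early on empty documents so
# the failure surfaces as Webfinger::Error instead of a crash later on.
def body_to_json(body, url)
  raise Webfinger::Error, "Empty JSON response when looking up #{url}" if body.blank?

  Oj.load(body, mode: :strict)
rescue Oj::ParseError
  raise Webfinger::Error, "Invalid JSON response when looking up #{url}"
end
```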
* Switch from unmaintained paperclip to kt-paperclip
* Drop some compatibility monkey-patches not required by kt-paperclip
* Drop media spoof check monkey-patching
It's broken with kt-paperclip and hopefully it won't be needed anymore
* Fix regression introduced by paperclip 6.1.0
* Do not rely on pathname to call FastImage
* Add test for ogg vorbis file with cover art
* Add audio/vorbis to the accepted content-types
This seems erroneous as this would be the content-type for a vorbis stream
without an ogg container, but that's what the `marcel` gem outputs, so…
* Restore missing for_as_default method
* Refactor Attachmentable concern and delay Paperclip's content-type spoof check
Check for content-type spoofing *after* setting the extension ourselves, this
fixes a regression with kt-paperclip's validations being more strict than
paperclip 6.0.0 and rejecting some Pleroma uploads because of unknown
extensions.
* Please CodeClimate
* Add audio/vorbis to the unreliable set
It doesn't correspond to a file format and thus has no extension associated.
The auto-linking code basically rewrote the whole string, escaping non-ASCII
characters in an inefficient way, and built a full character offset map
between the unescaped and escaped texts before sending the contents to
TwitterText's extractor.
Instead of doing that, this commit changes the TwitterText regexps to include
valid IRI characters in addition to valid URI characters.
* Fix Delete and Create-related locks expiring too fast
Fixes #16238
By default, RedisLock expires after 10 seconds, which may not be enough to
process statuses, especially when those have attached media files.
This commit extends those 10 seconds to 15 minutes, which should be plenty
enough to handle any status, while being short enough to not waste many
sidekiq job retries in the exceedingly rare case in which a sidekiq process
would crash when processing a `Create` or `Delete`.
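A hedged sketch of the lock usage with the longer autorelease; `RedisLock.acquire` and `lock.acquired?` follow the redis_lock gem as used elsewhere in the codebase, while the key name is illustrative:

```ruby
lock_options = { redis: Redis.current, key: "create:#{object_uri}", autorelease: 15.minutes.to_i }

RedisLock.acquire(lock_options) do |lock|
  if lock.acquired?
    # create or delete the status, including media processing
  else
    raise Mastodon::RaceConditionError
  end
end
```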
* Fix other RedisLock autorelease durations
Fixes #15645
- things that only perform a few simple database queries (e.g. finding and
saving a record) have been left unchanged, so they'll still use the default
10s duration
- things that perform significantly more complex database queries have been
changed to a 5 minutes timeout
- things that perform multiple HTTP queries have been changed to a 15 minutes
timeout
If a status with a hashtag becomes very popular, it stands to
reason that the hashtag should have a chance at trending
Fix no stats being recorded for hashtags that are not allowed
to trend, and stop ignoring bots
Remove references to hashtags in profile directory from the code
and the admin UI
* Fix media processing getting stuck on too much stdin/stderr
See thoughtbot/terrapin#5
* Remove dependency on paperclip-av-transcoder gem
* Remove dependency on streamio-ffmpeg gem
* Disable stdin on ffmpeg process
* Add tests
* Ensure deleted statuses are marked as such
* Save some redis memory by not storing URIs in delete_upon_arrival values
* Avoid possible race condition when processing incoming Deletes
* Avoid potential duplicate Delete forwards
* Lower lock durations to reduce issues in case of hard crash of the Rails process
* Check for `lock.acquired?` and improve comment
* Refactor RedisLock usage in app/lib/activitypub
* Fix using incorrect or non-existent sender for relaying Deletes
* Update devise-two-factor to unreleased fork for Rails 6 support
Update tests to match new `rotp` version.
* Update nsa gem to unreleased fork for Rails 6 support
* Update rails to 6.1.3 and rails-i18n to 6.0
* Update to unreleased fork of pluck_each for Rails 6 support
* Run "rails app:update"
* Add missing ActiveStorage config file
* Use config.ssl_options instead of removed ApplicationController#force_ssl
Disabled force_ssl-related tests as they do not seem to be easily testable
anymore.
* Fix nonce directives by removing Rails 5 specific monkey-patching
* Fix fixture_file_upload deprecation warning
* Fix yield-based test failing with Rails 6
* Use Rails 6's index_with when possible
* Use ActiveRecord::Cache::Store#delete_multi from Rails 6
This will yield better performance when deleting an account
* Disable Rails 6.1's automatic preload link headers
Since Rails 6.1, ActionView adds preload links for javascript files
in the Link header by default.
In our case, that will bloat headers too much and potentially cause
issues with reverse proxies. Furthermore, we don't need those links,
as we already output them as HTML link tags.
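The relevant Rails 6.1 switch is a one-liner; where exactly it lives in the configuration is illustrative:

```ruby
# config/application.rb (sketch)
# Stop ActionView from emitting `Link: <...>; rel=preload` headers for
# javascript_include_tag / stylesheet_link_tag output.
config.action_view.preload_links_header = false
```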
* Switch to Rails 6.0 default config
* Switch to Rails 6.1 default config
* Do not include autoload paths in the load path
* Prepare Mastodon for zeitwerk autoloader (Rails 6)
Add inflections and rename/move a few classes.
In particular, app/lib/exceptions.rb and app/lib/sanitize_config.rb
were manually loaded while still in autoload paths.
* Add inflection for Url → URL
* Fix misuse of foreign_type
* Fix use of removed "add_template_helper"
* Use response.media_type instead of response.content_type in tests
* Fix CSV export controller test on Rails 6
Rails 6 sets a "filename*" field in the Content-Disposition header to
explicitly encode the filename as UTF-8.
This change checks the first part of the Content-Disposition header so
it matches in both Rails 5 and Rails 6.
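A hedged sketch of the relaxed assertion (the filename is illustrative):

```ruby
# Only check the part of the header that is identical on Rails 5 and 6,
# ignoring the Rails 6 `filename*=UTF-8''…` suffix.
expect(response.headers['Content-Disposition']).to start_with('attachment; filename="blocked_accounts.csv"')
```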
* Fix emoji formatting with Rails 6
* Make emoji output more idiomatic and robust
* Switch from redis-rails gem to built-in Rails redis cache storage
* Update twitter-text from 1.14 to 3.1.0
* Disable emoji parsing
* Properly depend on twitter-text for url detection
* Fix some URLs being wrongly detected client-side
* Add test for server-side validation of non-autolinkable URLs
* Fix server-side status length counting
* Fix URI of repeat follow requests not being recorded
In case we receive a “repeat” or “duplicate” follow request, we automatically
fast-forward the accept with the latest received Activity `id`, but we don't
record it.
In general, a “repeat” or “duplicate” follow request may happen if for some
reason (e.g. inconsistent handling of Block or Undo Accept activities, an
instance being brought back up from the dead, etc.) the local instance thought
the remote actor was following them while the remote actor thought otherwise.
In those cases, the remote instance does not know about the older Follow
activity `id`, so keeping that record serves no purpose, but knowing the most
recent one is useful if the remote implementation at some point refers to it
by `id` without inlining it.
* Add tests
Unlike locally-issued blocks, they weren't clearing follow
relationships in both directions, follow requests or notifications.
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
* Delete status records by batches of 50
* Do not precompute values that are only used once
* Do not generate redis events for removal of public toots older than two weeks
* Filter reported toots a priori for polls and status deletion
* Do not process reblogs when cleaning up public timelines
As in Mastodon proper, reblogs don't appear in public TLs
* Clean the deleted account's own feed in one go
* Refactor Account#clean_feed_manager and List#clean_feed_manager
* Delete instead of destroy a few more associations
* Fix preloading
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
Extract the logic for determining which ActivityPub inboxes to send deletes
to into its own class, and explicitly include the person the status
replied to (even if not mentioned), people who favourited it, and
people who replied to it (though that last one is still not recursive).
Nginx can be configured to bypass proxy cache when a special header
is in the request. If the response is cacheable, it will replace
the cache for that request. Proxy caching of media files is
desirable when using object storage as a way of minimizing bandwidth
costs, but has the drawback of leaving deleted media files for
a configured amount of cache time. A cache buster can make those
media files immediately unavailable. This especially makes sense
when suspending and unsuspending an account.
When failing to fetch the target account, the ProcessingWorker fails
as expected, but since it hasn't cleared the `move_in_progress` flag,
the next attempt at processing skips the `Move` activity altogether.
This commit changes it to clear the flag when encountering any
unexpected error on fetching the target account. This is likely to
occur because of, e.g., a timeout when many instances query the
same actor at the same time.
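A hedged sketch of the error handling; the flag-clearing method and the variable names are illustrative:

```ruby
def perform
  target_account = ActivityPub::FetchRemoteAccountService.new.call(target_uri)
  # ... process the Move ...
rescue
  unmark_move_in_progress! # clear the flag so a later attempt is not skipped
  raise
end
```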
* Add support for followers synchronization on the receiving end
Check the `collectionSynchronization` attribute on `Create` and `Announce`
activities and synchronize followers from provided collection if possible.
* Add tests for followers synchronization on the receiving end
* Add support for follower synchronization on the sender's end
* Add tests for the sending end
* Switch from AS attributes to HTTP header
Replace the custom `collectionSynchronization` ActivityStreams attribute by
an HTTP header (`X-AS-Collection-Synchronization`) with the same syntax as
the `Signature` header and the following fields:
- `collectionId` to specify which collection to synchronize
- `digest` for the SHA256 hex-digest of the list of followers known on the
receiving instance (where “receiving instance” is determined by accounts
sharing the same host name for their ActivityPub actor `id`)
- `url` of a collection that should be fetched by the instance actor
Internally, move away from the webfinger-based `domain` attribute and use
account `uri` prefix to group accounts.
* Add environment variable to disable followers synchronization
Since the whole mechanism relies on some new preconditions that, in some
extremely rare cases, might not be met, add an environment variable
(DISABLE_FOLLOWERS_SYNCHRONIZATION) to disable the mechanism altogether and
avoid followers being incorrectly removed.
The current conditions are:
1. all managed accounts' actor `id` and inbox URL have the same URI scheme and
netloc.
2. all accounts whose actor `id` or inbox URL share the same URI scheme and
netloc as a managed account must be managed by the same Mastodon instance
as well.
As far as Mastodon is concerned, breaking those preconditions requires extensive
configuration changes in the reverse proxy and might also cause other issues.
Therefore, this environment variable provides a way out for people with highly
unusual configurations, and can be safely ignored for the overwhelming majority
of Mastodon administrators.
* Only set follower synchronization header on non-public statuses
This is to avoid unnecessary computations and allow Follow-related
activities to be handled by the usual codepath instead of going through
the synchronization mechanism (otherwise, any Follow/Undo/Accept activity
would trigger the synchronization mechanism even if processing the activity
itself would be enough to re-introduce synchronization)
* Change how ActivityPub::SynchronizeFollowersService handles follow requests
If the remote lists a local follower which we only know has sent a follow
request, consider the follow request as accepted instead of sending an Undo.
* Integrate review feedback
- rename X-AS-Collection-Synchronization to Collection-Synchronization
- various minor refactoring and code style changes
* Only select required fields when computing followers_hash
* Use actor URI rather than webfinger domain in synchronization endpoint
* Change hash computation to be a XOR of individual hashes
Makes it much easier to be memory-efficient, and avoids sorting discrepancy issues.
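A self-contained sketch of the XOR approach (names are illustrative): each follower URI is hashed individually and the digests are XORed, so the result is independent of ordering and never needs the whole list in memory at once.

```ruby
require 'digest'

def followers_hash(follower_uris)
  digest = [0] * 32 # 256 bits

  follower_uris.each do |uri|
    Digest::SHA256.digest(uri).bytes.each_with_index do |byte, i|
      digest[i] ^= byte
    end
  end

  digest.pack('C*').unpack1('H*') # hex string, same shape as a SHA256 hexdigest
end

followers_hash(%w(https://example.com/users/alice https://example.com/users/bob))
```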
* Marginally improve followers_hash computation speed
* Further improve hash computation performances by using pluck_each
There are edge cases where requests to certain hosts timeout when
using the vanilla HTTP.rb gem, which the goldfinger gem uses. Now
that we no longer need to support OStatus servers, webfinger logic
is so simple that there is no point encapsulating it in a gem, so
we can just use our own Request class. With that, we benefit from
more robust timeout code and IPv4/IPv6 resolution.
Fix #14091
* Add bell button
Fix #4890
* Remove duplicate type from post-deployment migration
* Fix legacy class type mappings
* Improve query performance with better index
* Fix validation
* Remove redundant index from notifications
* Check for and record reblog info atomically
Instead of using ZREVRANK to determine whether a reblog is a new reblog or not,
use ZADD's NX option to perform the check and addition atomically.
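A hedged sketch of the atomic check-and-add with the redis gem; the key naming and method are illustrative:

```ruby
# Returns true only when this reblog group was not already present in the
# timeline, so the check and the insertion happen in a single Redis command.
def first_reblog_in_feed?(redis, timeline_key, reblog_of_id, score)
  redis.zadd(timeline_key, score, reblog_of_id, nx: true)
end
```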
* Replace ZREVRANK call with ZSCORE, which is more efficient
* Make tests a bit stricter
* Fix off-by-one
* Add database support for list show-reply preferences
* Add backend support to read and update list-specific show_replies settings
* Add basic UI to set list replies setting
* Add specs for list replies policy
* Switch "cycling" reply policy link to a set of radio inputs
* Capitalize replies_policy strings
* Change radio button design to be consistent with that of the directory explorer
Follow-up to #14359
In the case of limited toots, the receiver may not be explicitly part of the
audience. If a specific user's inbox URI was specified, it makes sense to
dereference the toot from the corresponding user, instead of trying to find
someone in the explicit audience.
* Fix not handling Undo on some activity types when they aren't inlined
When receiving an Undo for a non-inlined activity, try looking it up in
database using the URI. The queries are ad-hoc because we don't have a global
index of object URIs, and not all activity types are stored in database with
an index on their URI.
Announces are just statuses, and have an index on URIs, so this check can
be done efficiently.
Accepts cannot be handled at all because we don't record their URI at any
point.
Follows don't have an index on URI, but they have an index on the issuing
account, which should make such queries largely manageable.
Likes don't have an index on URI, they have an index on the issuing account,
but the number of favs per account may be very high, so I decided not to
handle that.
Blocks don't have an index on URI, but they have an index on the issuing
account, which should make such queries largely manageable.
In all cases, if an Undo could not be handled properly, we call `delete_later!`
because that does not require us to know more than the URI of the undone
property.
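As an illustration of the Announce case, a hedged sketch (service and helper names follow the surrounding codebase, the rest is illustrative):

```ruby
# Look the non-inlined Announce up by URI; if it is unknown, fall back to
# delete_later! since the URI is all we have.
def undo_announce
  status = Status.find_by(uri: object_uri, account: @account)

  if status.nil?
    delete_later!(object_uri)
  else
    RemoveStatusService.new.call(status)
  end
end
```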
* Add tests
* Make newer blocks overwrite older ones
Allows re-synchronizing block info by re-blocking and un-blocking again
when the original Undo Block has been lost.
* Fix the local group's followers collection
* Fix to accept followed relayed_through_account
* Add local delivery to the group's followers
* Fix code style
* Revert "Add local delivery to the group's followers"
This reverts commit 3237effc19.
- Change audio files to not be stripped of metadata
- Automatically extract cover art from audio if it exists
- Add `thumbnail` parameter to `POST /api/v1/media`, `POST /api/v2/media` and `PUT /api/v1/media/:id`
- Add `icon` to represent it in attachments in ActivityPub
- Fix `preview_url` containing the URL of a missing image instead of null when there is no thumbnail
- Fix duration of audio not being displayed on public pages until the file is loaded
Thai does not separate words with spaces, so I figured it should be included
in the 'reliable characters regexp' that denotes languages that do the same.
Related: #13891.
* FIX: filters ignore media descriptions
* remove parentheses to make codeclimate happy
* combine the text and run the regular expression only once.
https://github.com/tootsuite/mastodon/pull/13837#discussion_r431752581
* Fix use of “filter” instead of “compact”, fix coding style issues
Co-authored-by: Thibaut Girka <thib@sitedethib.com>
* Improve RSS entries for statuses
- Render polls in both accounts and tags serializers
- Refactor RSS serializers
- Change title preview to include ellipsis when truncated
- Change title preview to show CW instead of toot text
- Add tests
* Remove title from OEmbed serialization
Twitter doesn't serialize title either, and this allows us to move the
title formatting code to the RSS serializers.
* Fix PostgreSQL load when linking in announcements
Fixes #13245 by caching status lookups
Since statuses are supposed to be known already and we only
need their URLs and a few other things, caching them should
be fine.
Since it's only used by announcements so far, there won't
be many statuses to cache.
* Perform status lookup when saving announcements, not when rendering them
* Change EntityCache#status to fetch URLs instead of looking into the database
* Move announcement link lookup to publishing worker
* Address issues pointed out during review
Change `all_day` to be a visual client-side cue only
Publish immediately if `scheduled_at` is in the past
Add `published_at` and `updated_at` to announcements JSON
* Add announcements
Fix #11006
* Add reactions to announcements
* Add admin UI for announcements
* Add unit tests
* Fix issues
- Add `with_dismissed` param to announcements API
- Fix end date not being formatted when time range is given
- Fix announcement delete causing reactions to send streaming updates
- Fix announcements container growing too wide and mascot too small
- Fix `all_day` being settable when no time range is given
- Change text "Update" to "Announcement"
* Fix scheduler unpublishing announcements before they are due
* Fix filter params not being passed to announcements filter
- update http gem to avoid errors
- update blurhash gem to avoid shared object loading error
- update goldfinger gem so the http gem could be updated
- update json gem to avoid warnings
* Fix wrong grouping in Twitter valid_url regex
* Add support for xmpp URIs
Fixes #9776
The difficult part is autolinking, because Twitter-text's extractor does
some pretty ad-hoc stuff to find things that “look like” URLs, and XMPP
URIs do not really match the assumptions of that lib, so it doesn't sound
wise to try to shoehorn them into the existing regex.
This is why I used a specific regex (very close, although slightly more
permissive than the RFC), and a specific scan function (a simplified version
of the generalized one from Twitter).
* Remove leading “xmpp:” from auto-linked text
* Remove “protocol” argument and return value, as only ActivityPub is supported
* Remove FetchRemoteAccountService, only use ActivityPub::FetchRemoteAccountService
* Fix tests
* Revert "Fix ignoring whole status because of one invalid hashtag (#11621)"
This reverts commit dff46b260b.
* Fix statuses being rejected because of invalid hashtag names
* Add spec for invalid hashtag names in statuses
* Add test for featured tags controller
This adds support for the Event AP type in Mastodon. Events are converted
into toots by taking their title (AS name) and their URL (AP ID). The event
picture is also brought in if available.
Testable by fetching event content from https://test.mobilizon.org
Signed-off-by: Thomas Citharel <tcit@tcit.fr>
* Show badge on group actor in WebUI
* Do not notify in case of by following group actor
* If you mention a group actor, also mention the group actor's followers
* Relax characters that can be used in username (same as Application)
* Revert "Relax characters that can be used in username (same as Application)"
This reverts commit 7e10a137b8.
* Delete display_name method
Some ActivityPub servers refuse to embed remote objects into their own
output. This is because they are not the authoritative source for these
objects, and as such embedding them is always a waste of space. The
follow request and follow models contain a URI, so this can be used to
match them.
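A hedged sketch of matching such a non-inlined object by URI (variable names are illustrative):

```ruby
# Accept or reject the referenced object even when it is only given by id, by
# matching the URI recorded when the follow (request) was created locally.
target = FollowRequest.find_by(uri: object_uri) || Follow.find_by(uri: object_uri)
```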
Fetching statuses from all followed accounts at once takes too long
within Postgres. Fetching them one by one and merging in Ruby
could be a lot less resource-intensive
Because the query for dynamically fetching the home timeline is so
heavy, we can no longer offer it when the home timeline is missing
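A hedged sketch of the per-account fetch-and-merge approach mentioned above; the scopes and the 400-status limit are assumptions:

```ruby
# Fetch recent statuses per followed account and merge them in Ruby instead
# of issuing one large query across all followed accounts.
def statuses_for_regeneration(account, limit: 400)
  account.following.find_each.flat_map do |followee|
    followee.statuses.where(visibility: [:public, :unlisted, :private])
            .order(id: :desc).limit(limit).to_a
  end.sort_by(&:id).last(limit)
end
```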
* Add voters count to polls
* Add ActivityPub serialization and parsing of voters count
* Add support for voters count in WebUI
* Move incrementation of voters count out of redis lock
* Reword “voters” to “people”
* Add nodeinfo endpoint
* don't commit stuff from my local dev
* consistent naming since we implemented 2.1 schema
* Add some additional node info stuff
* Add nodeinfo endpoint
* don't commit stuff from my local dev
* consistent naming since we implemented 2.1 schema
* expanding this to include federation info
* codeclimate feedback
* CC feedback
* using activeserializers seems like a good idea...
* get rid of draft 2.1 version
* Reimplement 2.1, also fix metaData -> metadata
* Fix metaData -> metadata here too
* Fix nodeinfo 2.1 tests
* Implement cache for monthly user aggregate
* Useless
* Remove ostatus from the list of supported protocols
* Fix nodeinfo's open_registration reading obsolete setting variable
* Only serialize domain blocks with user-facing limitations
* Do not needlessly list noop severity in nodeinfo
* Only serialize domain blocks info in nodeinfo when they are set to be displayed to everyone
* Enable caching for nodeinfo endpoints
* Fix rendering nodeinfo
* CodeClimate fixes
* Please CodeClimate
* Change InstancePresenter#active_user_count_months for clarity
* Refactor NodeInfoSerializer#metadata
* Remove nodeinfo 2.1 support as the schema doesn't exist
* Clean-up
* Change silenced accounts to require approval on follow
* Also require approval for follows by people explicitly muted by target accounts
* Do not auto-accept silenced or muted accounts when switching from locked to unlocked
* Add `follow_requests_count` to verify_credentials
* Show “Follow requests” menu item if needed even if account is locked
* Add tests
* Correctly reflect that follow requests weren't auto-accepted when local account is silenced
* Accept follow requests from user-muted accounts to avoid leaking mutes
The reason for unattaching media instead of removing it is to support
delete & redraft functionality, but remote or staff-removed statuses
will never be redrafted, so the media should be deleted immediately
- Restrict followers counts to local users to minimize local advantage
- Fix emoji shortcodes causing error in search
- Fix search syntax parse errors not being caught
* Play animated custom emoji on hover in status
* Play animated custom emoji on hover in display names
* Play animated custom emoji on hover in bios/bio fields
* Add support for animation on hover on public pages emojis too
* Fix tests
* Code style cleanup
* Add support for an instance actor
* Skip username validation for local Application accounts
* Add migration script to create instance actor
* Make Codeclimate happy
* Switch to id -99 for instance actor
* Remove unused `icon` and `image` attributes from instance actor
* Use if/elsif/else instead of return + ternary operator
* Add instance actor to fresh installs
* Use instance actor as instance representative
Use instance actor for forwarding reports, relay operations, and spam
auto-reporting.
* Seed database in test environment
* Fix single-user mode
* Fix tests
* Fix specs to accommodate an extra `Account`
* Auto-reject follows on instance actor
Following an instance actor might make sense, but we are not handling that
right now, so auto-reject.
* Fix webfinger lookup and serialization for instance actor
* Rename instance actor
* Make it clear in the HTML view that the instance actor should not be blocked
* Raise cache time for instance actor as there's no dynamic content
* Re-use /about/more with a flash message for instance actor profile
* Add a spam check
* Use Nilsimsa to generate locality-sensitive hashes and compare using Levenshtein distance
* Add more tests
* Add exemption when the message is a reply to something that mentions the sender
* Use Nilsimsa Compare Value instead of Levenshtein distance
* Use MD5 for messages shorter than 10 characters
* Add message to automated report, do not add non-public statuses to
automated report, add trust level to accounts and make unsilencing
raise the trust level to prevent repeated spam checks on that account
* Expire spam check data after 3 months
* Add support for local statuses, reduce expiration to 1 week, always create a report
* Add content warnings to the spam check and exempt empty statuses
* Change Nilsimsa threshold to 95 and make sure removed statuses are removed from the spam check
* Add all matched statuses into automatic report
* Disable incorrect check for hidden services in Socket
Hidden services can only be accessed with an HTTP proxy, in which
case the host seen by the Socket class will be the proxy, not the
target host.
Hidden services are already filtered in `Request#initialize`.
* Use our Socket class to connect to HTTP proxies
Avoid the timeout logic being bypassed
* Add support for IP addresses in Request::Socket
* Refactor a bit, no need to keep the DNS resolver around
* Remove Salmon and PubSubHubbub endpoints
* Add error when trying to follow OStatus accounts
* Fix new accounts not being created in ResolveAccountService
* Add request pool to improve delivery performance
Fix #7909
* Ensure connection is closed when exception interrupts execution
* Remove Timeout#timeout from socket connection
* Fix infinite retrial loop on HTTP::ConnectionError
* Close sockets on failure, reduce idle time to 90 seconds
* Add MAX_REQUEST_POOL_SIZE option to limit concurrent connections to the same server
* Use a shared pool size, 512 by default, to stay below open file limit
* Add some tests
* Add more tests
* Reduce MAX_IDLE_TIME from 90 to 30 seconds, reap every 30 seconds
* Use a shared pool that returns preferred connection but re-purposes other ones when needed
* Fix wrong connection being returned on subsequent calls within the same thread
* Reduce mutex calls on flushes from 2 to 1 and add test for reaping
* Change domain blocks to automatically support subdomains
If a more authoritative domain is blocked (example.com), then the
same block will be applied to a subdomain (foo.example.com)
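A hedged sketch of matching a domain against blocks on any of its parent domains (the method name is illustrative):

```ruby
# "foo.bar.example.com" is checked against blocks on "foo.bar.example.com",
# "bar.example.com", "example.com" and "com"; the most specific block wins.
def domain_block_for(domain)
  segments = domain.downcase.split('.')
  variants = segments.each_index.map { |i| segments[i..-1].join('.') }

  DomainBlock.where(domain: variants).order(Arel.sql('char_length(domain) DESC')).first
end
```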
* Match subdomains of existing accounts when blocking/unblocking domains
* Improve code style
* Add responsive panels to the single-column layout
* Fixes
* Fix not being able to save the preference
* Fix code style issues
* Set max-height on the compose textarea and add a link to relationship manager
* Prevent silenced local users from notifying remote users not following them
This is an attempt to extend the local restrictions of silenced users to the
federation.
* Add tests
* Add tests for making sure private status don't get sent over OStatus
* Add blurhash
* Use fallback color for spoiler when blurhash missing
* Federate the blurhash and accept it as long as it's at most 5x5
* Display unknown media attachments as blurhash placeholders
* Improve style of embed actions and spoiler button
* Change blurhash resolution from 3x3 to 4x4
* Improve dependency definitions
* Fix code style issues
* Backend changes for custom emoji support in poll options
* Serialize poll emojis in REST API
* Render custom emojis in poll options
* Render custom emoji in poll options on public pages
* create account_identity_proofs table
* add endpoint for keybase to check local proofs
* add async task to update validity and liveness of proofs from keybase
* first pass keybase proof CRUD
* second pass keybase proof creation
* clean up proof list and add badges
* add avatar url to keybase api
* Always highlight the “Identity Proofs” navigation item when interacting with proofs.
* Update translations.
* Add profile URL.
* Reorder proofs.
* Add proofs to bio.
* Update settings/identity_proofs front-end.
* Use `link_to`.
* Only encode query params if they exist.
URLs without params had a trailing `?`.
* Only show live proofs.
* change valid to active in proof list and update liveness before displaying
* minor fixes
* add keybase config at well-known path
* extremely naive feature flagging off the identity proof UI
* fixes for rubocop
* make identity proofs page resilient to potential keybase issues
* normalize i18n
* tweaks for brakeman
* remove two unused translations
* cleanup and add more localizations
* make keybase_contacts an admin setting
* fix ExternalProofService my_domain
* use Addressable::URI in identity proofs
* use active model serializer for keybase proof config
* more cleanup of keybase proof config
* rename proof is_valid and is_live to proof_valid and proof_live
* cleanup
* assorted tweaks for more robust communication with keybase
* Clean up
* Small fixes
* Display verified identity identically to verified links
* Clean up unused CSS
* Add caching for Keybase avatar URLs
* Remove keybase_contacts setting
* Fix poll update handler calling a method that was not available
Fix regression from #10209
* Refactor VoteService
* Refactor ActivityPub::DistributePollUpdateWorker and optimize it
* Fix typo
* Fix typo
* Process incoming poll tallies update
* Send Update on poll vote
* Do not send Updates for a poll more often than once every 3 minutes
* Include voters in people to notify of results update
* Schedule closing poll worker on poll creation
* Add new notification type for ending polls
* Add front-end support for ended poll notifications
* Fix UpdatePollSerializer
* Fix Updates not being triggered by local votes
* Fix tests failure
* Fix web push notifications for closing polls
* Minor cleanup
* Notify voters of both remote and local polls when those close
* Fix delivery of poll updates to mentioned accounts and voters
* When serializing polls over OStatus, serialize poll options to text
* Do the same for RSS feeds
* Use “[ ] ” as a prefix for poll options instead of “- ”
* Add polls
Fix #1629
* Add tests
* Fixes
* Change API for creating polls
* Use name instead of content for votes
* Remove poll validation for remote polls
* Add polls to public pages
* When updating the poll, update options just in case they were changed
* Fix public pages showing both poll and other media
* Fetch up to 5 replies when discovering a new remote status
This is used for resolving threads downwards. The originating
server must add a “replies” attribute with such replies for it to
be useful.
* Add some tests for ActivityPub::FetchRepliesWorker
* Add specs for ActivityPub::FetchRepliesService
* Serialize up to 5 public self-replies for ActivityPub notes
* Add specs for ActivityPub::NoteSerializer
* Move exponential backoff logic to a worker concern
* Fetch first page of paginated collections when fetching thread replies
* Add specs for paginated collections in replies
* Move Note replies serialization to a first CollectionPage
The collection isn't actually paginated yet, as it has neither an id nor
a `next` field. This may come in another PR.
* Use pluck(:uri) instead of map(&:uri) to improve performance
* Fix fetching replies when they are in a CollectionPage
`::FetchRemoteAccountService` is not `ActivityPub::FetchRemoteAccountService`;
its second argument is the pre-fetched body. Passing `id: false` actually passed
a `Hash` as the prefetched body, instead of properly resolving unknown remote
accounts.
* Filter incoming Announce activities by relation to local activity
Reject if announcer is not followed by local accounts, and is not
from an enabled relay, and the object is not a local status
Follow-up to #10005
* Fix tests
Reject those from accounts with no local followers, from relays
that are not enabled, which do not address local accounts and are
not replies to accounts that do have local followers
* When self-boosting, embed original toot into Announce serialization
* Process unknown self-boosts from Announce object if it is more than a URI
* Add some self-boost specs
* Only serialize private toots in self-Announces
* Add Tombstone model to remember object deletion
* Do not recreate a status if it has been deleted
* Record Tombstone for remote deleted items
Also, only record deleted items from same-host actors
* Clear a user's tombstones when their key changes
* Ensure blocked user unfollows blocker if Block/Undo Block are processed out of order
* Add specs for Block causing unfollow and for out-of-order Block + Undo
* Do not LDS-sign Follow, Accept, Reject, Undo, Block
* Do not use LDS for Create activities of private toots
* Minor cleanup
* Ignore unsigned activities instead of misattributing them
* Use status.distributable? instead of querying visibility directly
* Add setting to not aggregate reblogs
Fixes #9222
* Handle cases where user is nil in add_to_home and add_to_list
* Add hint for setting_aggregate_reblogs option
* Reword setting_aggregate_reblogs label
* Fix connect timeout not being enforced
The loop was catching the timeout exception that should stop execution, so the next IP would no longer be within a timed block, which led to requests taking much longer than 10 seconds.
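A hedged, simplified sketch of the corrected shape; the real code uses non-blocking sockets rather than Timeout, and the constants are illustrative:

```ruby
require 'socket'
require 'timeout'

CONNECT_TIMEOUT = 10
MAX_ATTEMPTS    = 2

def open_socket(addresses, port)
  addresses.take(MAX_ATTEMPTS).each do |address|
    # The timeout now applies to each attempt individually instead of being
    # swallowed by a rescue inside the loop.
    return Timeout.timeout(CONNECT_TIMEOUT) { TCPSocket.open(address.to_s, port) }
  rescue Timeout::Error, SystemCallError
    next
  end

  raise HTTP::ConnectionError, 'Unable to connect to any resolved address'
end
```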
* Use timeout on each IP attempt, but limit to 2 attempts
* Fix code style issue
* Do not break Request#perform if no block given
* Update method stub in spec for Request
* Move timeout inside the begin/rescue block
* Use Resolv::DNS with timeout of 1 to get IP addresses
* Update Request spec to stub Resolv::DNS instead of Addrinfo
* Fix Resolv::DNS stubs in Request spec