dril-archive

**DEAD. FUCK ELON MUSK.**

Due to the uncertain future of Twitter dot com, many have decided to pack their bags and be ready to move on in case the site ceases to exist. People started to worry about the whereabouts of the tweets of dril - a modern prophet "who could only emerge on an app like Twitter".

I started to write a backup tool to augment the incomplete Twitter archive downloads, and of course dril's feed was used for testing purposes. Around the same time, Nick Farruggia shared a Google spreadsheet with every dril tweet, which I then used as the basis to compile a JSON dataset from the Twitter web API. Eventually the archive, along with the scripts I used to download and compile it, ended up in this repository in order to convert the data into other formats and run a static website via GitHub Pages.

Downloads

Ok, this is a bit messy, but releases are hardly feasible for this type of repo. Instead, each build creates an artifact with the files that are committed to the gh-pages branch for the static website. You can download these artifacts under "Actions" → "Build" and filter by "scheduled". Click on the latest workflow run and scroll down to "Artifacts" - there you are!

Requirements

Installation

PHP for Windows

It might be necessary to provide a CA file for OpenSSL:
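A minimal sketch of the relevant php.ini directives, assuming a CA bundle (e.g. cacert.pem from the curl project) has been downloaded - the path below is only an example:

```ini
; php.ini - point OpenSSL and curl at a CA bundle (example path, adjust to your setup)
openssl.cafile = "C:\php\cacert.pem"
curl.cainfo = "C:\php\cacert.pem"
```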

PHP for Linux

A PHP installation guide for Linux can be found on digitalocean.com.
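As a quick-start sketch for Debian/Ubuntu-based systems (an assumption - package names vary by distro, and the exact extension list is a guess based on what an HTTP scraping tool typically needs):

```shell
# Debian/Ubuntu example; adjust package names for your distro
sudo apt-get update
sudo apt-get install php-cli php-curl php-mbstring
```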

Library installation
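Assuming the dependencies are managed via Composer (an assumption - check for a composer.json in the repo root), the libraries can be pulled in with:

```shell
# Install the PHP dependencies defined in composer.json
composer install
```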

Usage

Initial build

The official timeline/search API endpoints only return up to 3,200 tweets unless you have academic access. The initial build therefore uses the undocumented/unofficial adaptive search API that backs Twitter's web search, which means the tokens from a developer account won't work here. Obtaining the credentials is a bit messy and is described in the following steps:

The x-guest-token is valid for about 2 hours, the bearer token at least for a day.
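As a rough sketch of how such credentials are commonly obtained (an assumption - the exact steps may differ from the original instructions): the bearer token used by the web client can be copied from Twitter's main.js via the browser dev tools, and a guest token can then be requested from the activation endpoint. Placeholder values, not real credentials:

```shell
# Placeholder - replace with the web-app bearer token extracted from main.js
BEARER='AAAA...replace-me'

# Request a guest token; the JSON response contains {"guest_token":"..."}
curl -s -X POST 'https://api.twitter.com/1.1/guest/activate.json' \
  -H "Authorization: Bearer ${BEARER}"
```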

Now that everything's set up, you can run php build-clean.php in the ./cli directory and watch the console for a while... :tea: The output will be stored under /output, where you can open the index.html in a browser; the API responses are cached under /.build/<query-value>.

Incremental update

The incremental update utilizes the v1 API search endpoint, which returns 20 results with standard access. It will also update the user profiles in the timeline. The query should be the same as the one used to run the initial build. Run php incremental-update.php in ./cli and grab :coffee: (this script is also used in the daily-run workflow).

Counter update

The counter update uses the v2 tweets endpoint with the tweet.fields=public_metrics expansion to update the stale counter values of an existing timeline. Run php update-counts.php in ./cli :cake:
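For illustration, this is roughly what a request to that endpoint looks like (hypothetical tweet IDs and bearer token - the actual script batches the IDs of the existing timeline):

```shell
# Hypothetical IDs - the v2 tweets lookup accepts up to 100 comma-separated IDs
IDS="20,21,22"
URL="https://api.twitter.com/2/tweets?ids=${IDS}&tweet.fields=public_metrics"
echo "${URL}"

# With a v2 bearer token set, the actual request would be:
# curl -s -H "Authorization: Bearer ${BEARER_TOKEN}" "${URL}"
```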

Disclaimer

The scripts to create the archive are licensed under the WTFPL.<br> All tweets and media remain under copyright by their respective creators.