How I Auto-Medicate My Online Consumption

Since 2013 I have actively battled my own internet usage. The main culprits behind a bad experience are unhealthy and/or endless content.

My first attempts to fix this involved blocking sites completely via my router, browser extensions, or the hosts file on my computer. This didn’t work: no matter what lock I attached, I always had a key or an alternative route, and whenever willpower ran low I’d find a way around.

I found that filtering in combination with redirection proved a better strategy.

Below are some tools and tricks I’ve learned to automate and self-govern my web usage.

Filtering Reddit

A strategy that works for many people is to subscribe only to subreddits they like so their front page becomes a great experience. I was unable to stop myself from going to /r/all and /r/popular, which became the endless pit.

A better strategy has me filter Reddit using Reddit Enhancement Suite (RES), a browser extension that gives advanced control over the older Reddit layout. I filter to the extreme and have ~2,000 subreddits filtered from /r/all and /r/popular.

I initially blocked the hate groups that proliferated a few years back (most have since been permanently banned), then moved on to filtering some massively popular subreddits that suffer from too many people. It felt great, so I learned the RES keyboard shortcuts to make it easier (period key, f key, enter).

I now block any subreddit that:

  • Is too large a community to be useful, such as /r/pics or /r/funny
  • Is “news,” meme, or anime centric
  • Is for citizens of a specific country (/r/sweden)
  • Is sports ball related
  • Has lost its original special sauce (such as /r/wholesomememes)
  • Is injury or cringe focused
  • etc…

In addition, I have a safety check to limit the amount of Reddit usage per day, which helps on the rough days. I use LeechBlock to redirect to my Trello to-do list once I hit 100 minutes in a given day. I’ve experimented with reducing the time limit, but this works for the worst case.

Distraction-Free YouTube

The same strategy works for YouTube. Instead of blocking, I filter.

To filter YouTube I use DF Youtube, which disables autoplay, removes the sidebar, and hides feeds, playlists, comments, etc. A video page shows only the video (I use this addon to always show the video in theater mode).

If I go to YouTube’s home page, I redirect (using the Redirector extension) to the subscriptions page. The content here is curated and intentional. The rest of endless and unhealthy YouTube is hidden.
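Under the hood, a Redirector rule is just a URL pattern and a target. The pattern below is my approximation of that rule, sketched and tested in Ruby rather than in the extension itself:

```ruby
# Approximate rule: send only the bare YouTube home page to the
# subscriptions feed; video, channel, and search URLs pass through.
HOME_PATTERN = %r{\Ahttps?://(www\.)?youtube\.com/?\z}
SUBSCRIPTIONS = ''

def redirect(url)
  url.match?(HOME_PATTERN) ? SUBSCRIPTIONS : url
end
```

With this pattern, `redirect('')` lands on the subscriptions page while a direct video link is left untouched.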

Filtering Hacker News

Hacker News does a great job cleaning up unhealthy content, so to filter its endless content I use the Redirector extension to always redirect to hckr news, an alternative reader that shows only the top 20 links of the day. This keeps it focused and manageable.

Redirecting News Sites

LeechBlock has a list of all the major news sites. If I visit any of them, it redirects me to the text version of NPR, a fast-loading alternative with no JavaScript or image clutter.

Podcasting Setup For My Mom

I live in Minnesota; my mom lives in New York. I set her up with a way to podcast, and here are some notes.

My mom and her friend wanted to start a podcast. Initially they wanted a video channel on YouTube, but I was concerned the level of quality would be abysmal. Given the distance, a podcast seemed more manageable. I also value frugality when possible, so price was important.

We purchased 2 mics (Behringer Ultravoice Xm8500) and 2 budget arm/clamp-stands to use for their lounge chat setup.

Portable Recorder Fail

When browsing options, the TASCAM DP-006 seemed like a good choice. It was portable, had physical buttons, and didn’t need a computer nearby. The last point was the most enticing for her.

This setup wasn’t ideal because:

– The recording box is too feature-packed. It has actual mixing features built in. The menu was confusing, though eventually we had step-by-step instructions written down.

– Ultimately, the internal amp could not provide enough power to the microphones.

Audio Interface

The more classic budget setup is an audio interface plus Audacity. We purchased the Behringer U-Phoria UMC202HD audio interface. The MIDAS preamps on the UMC202HD provide more than enough power to the mics.

The microphones go into the audio interface, and the audio interface plugs into a computer. From there, we use Audacity to do the recordings.


Cost: Free

We still had to write down instructions for Audacity, but it was much easier this time. It also had the benefit of being one help-me-remote-desktop call away.

These are her notes to use:

  1. Open Audacity.
  2. Do initial Save of Project [File -> Save Project] into the Carlos/Drive/Mom_Podcast folder.
  3. Make sure the audio interface is selected (Windows WASAPI host, with the UMC202HD as the mic).
  4. Monitor the sound levels by clicking the meters in the top right and watching the bouncing audio level.
  5. Start recording with the red button. Do podcast. Press the square button to stop. Save project again. Go to beginning and listen to it.

Once she saves the files, they are auto-saved into Google Drive and I can access them in Minnesota. I open the Audacity file and split the single stereo track into 2 tracks, because the audio interface records the 2 mics onto 1 stereo track: one mic on the left channel and one on the right.

I then remove noise with Audacity’s Noise Reduction feature. For this I need 10 seconds of just the room noise with no talking. Audacity removes this room noise and keeps the voice. Audacity shines here.

I then export the WAV files from Audacity into a Work-In-Progress folder.


I own an existing license, but the free/lite version of the Ableton digital audio workstation should be enough.

The audio effects I use on the master track are:

  • A high-pass filter that cuts out all low frequencies
  • A utility that increases the volume
  • A compressor with a 2:1 ratio, 2.00 ms attack, and 15.00 ms release
  • An EQ that raises the frequencies my mom’s voice registers at
  • A utility that raises the volume again
  • A limiter that prevents any sound above -3.00 dB from passing

The orange track holds incidental music: something to open the podcast, transitions for when the conversation needs them, and music for the end.

We usually do the edits together because it’s a blast to laugh at the quirky laughs on loop and coughs that destroy a sentence.

The little clip that looks out of place on the middle track was actually recorded during editing. The original take had bad audio, so we recorded another on the fly.


To upload the audio files I use a hosting service that is free, well made, and now owned by Spotify.

Total Cost

2 Microphones: $46

2 Stands: $40

Audio Interface: $116

Ableton: $0 (though I have an existing license for the pro version)

Audacity: $0

Dick Hallorann’s “Nude Lady w/ Afro” Poster From The Shining

This is my favorite and most unlikely serendipitous thing in life so far. The story is short and sweet as well.

I saw The Shining in high school and really wanted a copy of a poster hanging on the wall in a motel room scene featuring Dick Hallorann. I searched the Internet and posted on fan sites, but no one seemed to have an idea where to get a copy.

Fast forward about five or six years: I was visiting Manuel Meurer, a college friend, in Berlin, Germany during a backpacking trip.

I walked into his house and my mouth dropped. He had an original copy hanging on his wall. His father had bought it in the ’70s and passed it down to him. He didn’t know it was a prop used in the movie, which added something extra to what he owned.

Manuel, a person I’d consider one of the good ones for various reasons, took it to a print shop and gave me a high resolution scan of the poster.

I spent about 80 hours editing the tarnished scan and now own the only (maybe) high resolution digital copy. I tried to contact the original press and purchase the rights to the photo for redistribution, but they have no info on the poster.

I’m ready to show the image, but I watermarked it because I’m not yet willing to give up the rarity.

Scaling My Partner’s Poetry (Part 2)

My goal today was to help Kaitlin know which poems she has already posted to Instagram in image format. The process consisted of downloading all of her uploaded images along with the associated URLs. Next, I ran the images through an image-to-text / OCR tool called Tesseract. I then compared each extracted text with the existing poem files, trying to find a match. Once I knew which Instagram posts matched which poems, I added the Instagram URL to the front matter created in part 1.

I was not super concerned with creating the perfect code, and I’m certain that improvements could be made to any code below. This script was only run once so it just needed to function.

Step 1: Downloading Her Instagram Posts

I used a Python program called Instalooter, which can download all images and videos associated with an Instagram user. Once I installed it on my Ubuntu laptop, I ran the following command in my terminal:

{{< cmd >}} instalooter user kaitquinnpoetry -d {{< /cmd >}}

The -d flag dumps metadata into a .json file alongside the downloaded images.

Step 2: Use Tesseract To Extract Text From Instagram Posts

The following Ruby code iterates through each Instagram post’s .json file and extracts text from the images using the RTesseract gem (a wrapper for the real Tesseract). The JSON file location, Instagram shortcode, and extracted text are written to a .csv file.

# frozen_string_literal: true

require 'bundler/inline'

gemfile do
  source ''
  gem 'rtesseract'
end

require 'json'
require 'csv'

class Gram
  attr_accessor :shortcode, :location, :text

  def initialize
    @text = +'' # unfrozen so extracted text can be appended
  end
end

def gram_posts
  Dir.glob(File.join(Dir.home, 'Code', 'poetry', 'insta', '*.json'))
end

# Assumption: the downloaded images live next to the .json metadata files
def gram_images
  Dir.glob(File.join(Dir.home, 'Code', 'poetry', 'insta', '*.jpg'))
end

@posts = []
gram_posts.sort.reverse.each do |gram_post|
  @posts << do |g|
    gram_json = JSON.parse(
    g.location = gram_post
    g.shortcode = gram_json['shortcode']
    # Images belonging to this post share the same filename timestamp prefix
    selected_images = { |i| File.basename(i)[0..8] == File.basename(gram_post)[0..8] }
    g.text << { |image| }.join("\n")
    puts g.text
    # binding.pry
  end
end'OCR_output.csv', 'w') do |csv|
  @posts.each do |post|
    csv << [post.location, post.shortcode, post.text]
  end
end

Step 3: Match Poem To Instagram Post

I had the original poem files in Markdown and text extracted from the Instagram posts. I had to compare the text and find matches. This connects the dots and gets me closer to the goal of putting the Instagram URL into the poem’s Markdown file front matter.

For this, I found a gem called similar_text and wrote a very inefficient script to compare the texts.

If the similarity is greater than 35%, I output the “match found” information to a .csv file. I arrived at 35% through trial and error; at that threshold there weren’t many false positives.

# frozen_string_literal: true

require 'bundler/inline'

gemfile do
  source ''
  gem 'similar_text'
end

require 'csv'

# The poem Markdown files produced in part 1
def poem_files
  Dir.glob(File.join(Dir.home, 'Drive', 'Poetry', '**', '*.md'))
end

@ocr_data = []
# At this point I manually added headers to the OCR_output.csv file
CSV.foreach('OCR_output.csv', headers: true) { |row| @ocr_data << row.to_hash }

@matches = []
poem_files.each do |poem_file|
  poem_text =
  @ocr_data.each do |ocrtext|
    similar_percent = poem_text.similar(ocrtext['text'])
    next unless similar_percent > 35
    @matches << [poem_file, ocrtext['slug'], similar_percent]
  end
end'matches.csv', 'w') do |csv|
  @matches.each do |match|
    csv << [match[0], match[1], match[2]]
  end
end

The CSV matches.csv now contains the file location, Instagram URL, and similarity percentage.

Step 4: Add Front Matter To Poem (Markdown File)

Now that I know which Instagram images matched which poem (Markdown) I can add the following front matter:

...[other front matter above]...
instagram_url: <theURL>

For this, I used the PadUtils gem, which has great functions for inserting lines at specified places in a text file. The following code inserts the instagram_url into the poem’s front matter:

# frozen_string_literal: true

require 'bundler/inline'

gemfile do
  source ''
  gem 'pad_utils'
end

require 'csv'

@matches = []
# At this point I manually added headers to the matches.csv file
CSV.foreach('matches.csv', headers: true) { |row| @matches << row.to_hash }

@matches.each do |match|
  poem = match['poem_location']
  slug = match['slug']
  puts "Slug: #{slug} - Poem: #{poem}"
  PadUtils.insert_before_last(original: poem, tag: '---', text: "\ninstagram_url: " + slug + "\n")
end

Remind me, what was the point?

Kaitlin now knows which poems have been posted to Instagram and has a link to where they are posted. In the event she wants to publish those poems elsewhere, she can quickly get to that Instagram post to make it private. There may be other undiscovered benefits to having that information more handy.

Scaling My Partner’s Poetry (Part 1)

The preface here is that my girlfriend is a prolific poet. She has written 2 or 3 poems each day for the better part of a year. Her Instagram has grown to about 2,000 followers rather quickly.

Her poems were all stored in Google Docs and hard for me to access programmatically. I’d often say things like, “Yeah but if I could access your poems I could do XYZ with a couple hours and my favorite programming language.”

My first idea for scaling was to have auto-generated Instagram images tailored to her style. That program morphed into what is now an exploratory side project to try out Vue.js (frontend) + Rails API (backend). I stopped working on it because, to be genuinely useful, version 2 would have had to function like a pared-down design app.

In Part 2 I add the Instagram URL to the Markdown poem file.

Escaping Google Docs Format

To make her poetry scalable, my first step was to leave Google Docs’ proprietary format using Google Takeout. This converted the poems to .txt files, which are easier to manage.

Next, I used Bulk Rename Utility to change the extensions of the .txt files to .md (Markdown).

Adding Front Matter Programmatically

I wanted to add front matter to the Markdown files to ensure the metadata about each poem stays with the poem itself. Somewhere along the way the Created and Last modified file attributes were overwritten. I could have retraced my steps and tried to prevent that, but I still had a previously made CSV file containing the dates, so I used it instead of backtracking.

I made the following Ruby script to prepend front matter to about 900 poems. I grabbed the title_magic function from here, and the file_prepend function from here.

# frozen_string_literal: true

# add_frontmatter.rb

require 'find'
require 'fileutils'
require 'date'
require 'csv'

$data = []
CSV.foreach('dates.csv', headers: true) { |row| $data << row.to_hash }

def start
  poems = Dir.glob(File.join(Dir.home, 'Drive', 'Poetry', '**', '*.md'))
  poems.each { |poem| add_frontmatter(poem) }
end

def add_frontmatter(poem)
  string_to_prepend = '---' + "\n" +
                      'title: ' + poem_title(poem) + "\n" +
                      'date: ' + poem_date(poem) + "\n" +
                      'tags: []' + "\n" +
                      '---' + "\n"
  file_prepend(poem, string_to_prepend)
end

def poem_title(poem)
  title_magic(File.basename(poem, '.md').gsub('_', ' '))
end

def poem_date(poem)
  original_date = $ do |data|
    File.join(Dir.home, 'Drive', data['path']) == poem
  end

  if original_date.empty?
    # No entry in the CSV for this poem; fall back to today (assumption)
  elsif original_date.count == 1
    original_date.first['date']
  else
    # Duplicates shouldn't happen with unique paths; take the first row
    original_date.first['date']
  end
end

def file_prepend(file, str)
  new_contents = '', 'r') do |fd|
    contents =
    new_contents = str + contents
  end, 'w') do |fd|
    fd.write(new_contents)
  end
end

def title_magic(sentence)
  stop_words = %w[a an and the or for of nor]
  sentence.split.each_with_index.map { |word, index|
    stop_words.include?(word) && index > 0 ? word : word.capitalize
  }.join(' ')
end

start