
Why I quit using Google

So I was recently asked why I prefer to use free and open source software over more conventional and popular proprietary software and services.

A few years ago I was an avid Google user. I was deeply embedded in the Google ecosystem and used their products everywhere. I used Gmail for email, Google Calendar and Contacts for PIM, YouTube for entertainment, Google Newsstand for news, Android for mobile, and Chrome as my web browser.

I would upload all of my family photos to Google Photos and all of my personal documents to Google Drive (which were all in Google Docs format). I used Google Domains to register my domain names for websites where I would keep track of my users using Google Analytics and monetize them using Google AdSense.

I used Google Hangouts (one of Google’s previous messaging plays) to communicate with friends and family and Google Wallet (with debit card) to buy things online and in-store.

My home is covered with Google Homes (1 in my office, 1 in my bedroom, 1 in the main living area) which I would use to play music on my Google Play Music subscription and podcasts from Google Podcasts.

I have easily invested thousands of dollars into my Google account to buy movies, TV shows, apps, and Google hardware devices. This was truly the Google life.

Then one day, I received an email from Google that changed everything.

“Your account has been suspended”

Just the thing you want to wake up to in the morning. An email from Google saying that your account has been suspended due to a perceived Terms of Use violation. No prior warning. No appeals process. No number to call. Trying to sign in to your Google account yields an error and all of your connected devices are signed out. All of your Google data, your photos, emails, contacts, calendars, purchased movies and TV shows. All gone.

I nearly had a heart attack, until I saw that the Google account that had been suspended was in fact not my main personal Google account, but a throwaway Gmail account that I had created years prior for a project. I hadn’t touched it since creation and forgot it existed. Apparently my personal Gmail was listed as the recovery address for the throwaway account, and that’s why I received the termination email.

Although I was able to breathe a sigh of relief this time, the email was a wake-up call. I was forced to critically reevaluate my dependence on a single company for all the tech products and services in my life.

I found myself to be a frog in a heating pot of water and I made the decision that I was going to jump out.

Leaving Google

Today there are plenty of lists on the internet providing alternatives to Google services, such as this and this. However, the “DeGoogle” movement was still in its infancy when I was making the move.

The first Google service I decided to drop was Gmail, the heart of my online identity. I migrated to Fastmail with my own domain in case I needed to move again (hint: glad I did, now I self-host my email). Fastmail also provided calendar and contacts solutions, so that took care of leaving Google Calendar and Contacts.

Here are some other alternatives that I moved to:

Migrating away from Google was not a fast or easy process. It took years to get where I am now, and there are still a couple of Google services that I depend on: YouTube and Google Home.

Eventually, my Google Homes will grow old and become unsupported, at which point hopefully the Mycroft devices will have matured and become available for purchase. YouTube may never be replaced (although I do hope projects like PeerTube succeed), but I find the compromise of using only one or two Google services acceptable.

At this point losing my Google account due to a mistake in their machine learning would largely be inconsequential, and my focus has shifted to leaving Amazon, which I use for most of my shopping and cloud services.

The reason I moved to mostly FOSS applications is that it seems to be the only software ecosystem where everything works seamlessly together and I don’t have to cede control to any single company. Alternatively, I could have simply split my service usage evenly across Google, Microsoft, Amazon, and Apple, but I don’t think they would have worked as nicely together.

Overall I’m very happy with the open source ecosystem. I use Ubuntu with KDE on all of my computers and Android (no GApps) on my mobile phone. I’ve ordered the PinePhone “Brave Heart” and hope to one day be able to use it or one of its successors as a daily driver with Ubuntu Touch or Plasma Mobile.

I don’t want to give the impression that I exclusively use open source software either; I do use a number of proprietary apps, including Sublime Text, Typora, and Cloudron.


How to Easily Migrate Emails Between Accounts

If you’ve decided to move to another email provider it’s possible to take all of your old emails and folders with you. The easiest way I’ve found to do this is using the mail client Mozilla Thunderbird.

Thunderbird new account dialog. File > New > Existing mail account.

With Thunderbird installed, sign in to both your old and new email accounts. This is provider dependent, but in general if you are using a popular email service like Gmail, Yahoo, or Outlook, Thunderbird can auto-discover the server settings. If you have two-factor authentication set up on your email account you may need to create an app password.

If you are unsure, here are the instructions for a few popular services:

When you set up your old account, make sure you set Thunderbird to download the entire email history, not just the last few months.

Account settings where you can set how many emails Thunderbird will download. Edit > Account Settings.

Once you are signed in to both accounts you should see all of your emails and folders in the old account. You may want to wait for Thunderbird to finish downloading emails if necessary.

To move emails, simply select the inbox of your old mail account, use Ctrl + A to select all the emails, then drag them to the new inbox. You will also need to drag each of the folders from the old email account to the new one.

If you’d like to just move a couple of emails you can select them individually and drag them to the new email account.


A Trip Through New York (1911)

I find this video just remarkable. It’s only been 109 years and yet things are so different now. I wonder what things will be like in a hundred more years. With any luck I’ll live to see it.


Orchid VPN

I have previously mentioned how I felt that Tor should offer some way for users to pay for bandwidth on its network to incentivize more nodes to join. Well, today I found out about Orchid which is a decentralized VPN that allows users to do just that.

It’s basically a marketplace for bandwidth between clients and VPN providers. Anyone can set up a node and act as an exit point. From what I’ve read, it seems like exit nodes can even choose what type of content will go through them: torrents, email, specific websites, etc. can all be blocked or allowed. The app will automatically pick providers that support the type of content you’re trying to access.

Given this dynamic I would imagine that different types of content will start to cost more. For example, bandwidth providers who allow torrents will charge a premium due to the increased legal risk. On the other hand, providers who only allow access to known safe sites like YouTube, Reddit, etc. would be much cheaper.

Orchid even supports multiple hops within the network just like Tor. There are a few concerns I have:

  • Since it’s decentralized, there is no way to ban exit nodes for logging people’s traffic
  • Everything is done in OXT, Orchid’s native currency on Ethereum, so it’s kind of a pain to pay for the service
  • Orchid uses its own VPN protocol, not a standard one like OpenVPN or WireGuard

For now, I’m going to continue using Private Internet Access as my VPN, but Orchid is something I’ll keep my eye on.


Building a Robot Cat

I recently took the CICS Make course available at the University of Massachusetts Amherst with my colleague Hannah Dacayanan. At the end of the course we were required to build a final project that involved interacting with the physical world using computers.

We decided to build a robot cat. More specifically, a small motorized car that would follow a laser emitted from a laser pointer around a flat surface. This was inspired by popular viral online videos of real cats trying to pounce on red dots from laser pointers.

Hardware

The first thing that needed to be done was to get all the hardware and put it together. The four main components used were a car kit provided to us by the class, a Raspberry Pi 4, the Raspberry Pi camera, and the L298N motor driver.

The car kit with motors and L298N motor driver attached. Raspberry Pi 4 and camera sitting on top.

Hannah assembled the car kit over our Thanksgiving break and I attached the L298N driver and Raspberry Pi via the Pi’s onboard GPIO pins.

The L298N supports two motors simultaneously by connecting each motor’s positive and negative leads to the outputs shown on the diagram. The Raspberry Pi controls those motors by supplying power to the pins labeled “A Enable” and “B Enable”, and sets the direction and speed of each motor by sending signals to the four pins between the enable pins. The top two control motor A and the bottom two motor B.

The direction of the motor is controlled by which pins are active and the speed of the motor is controlled by PWM on the active pin.
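To make that concrete, here is a minimal sketch using the RPi.GPIO library, wired like the left motor described below (the frequency and duty cycle are just illustrative):

import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
GPIO.setup([11, 13, 15], GPIO.OUT, initial=GPIO.LOW)  # enable, forward, backward

GPIO.output(13, GPIO.HIGH)  # forward pin high, backward pin low
pwm = GPIO.PWM(11, 1000)    # PWM on the enable pin at 1 kHz
pwm.start(50)               # 50% duty cycle, roughly half speed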

The Raspberry Pi connected to the L298N motor driver via GPIO.

We used six GPIO pins from the Raspberry Pi to control the motors, the first three (11, 13, 15) for the left, and the last three (19, 21, 23) for the right.

At this point the hardware is completely done.

Software

For the cat to follow the laser, we needed some software on the Raspberry Pi to take a frame from the camera and tell us where in that frame the laser is, if at all.

There are two possible approaches: a deep learning model or a hand-crafted algorithm. We opted to try the deep learning approach first.

Lasernet

To build a neural network that can both recognize and localize lasers in an image, we decided to use TensorFlow. We didn’t want to label tons of training data, and generating synthetic data yielded poor results, so instead we went with a semi-supervised network. Lasernet takes as input a frame of video and outputs the likelihood of a laser existing in the image. The network uses an attention mechanism, which is where we will get our localization properties from.

First, let’s import everything:

from tensorflow.keras.models import Model
from tensorflow.keras.layers import (
	Input,
	Conv2D,
	Activation,
	Reshape,
	Flatten,
	Lambda,
	Dense,
)
from tensorflow.keras.callbacks import ModelCheckpoint
import tensorflow.keras.backend as K

Then we can define some global settings that are used throughout the network:

# Settings
IMG_SHAPE = (128, 128, 3)
FILTERS = 16
DEPTH = 0
KERNEL = 8
BATCH_SIZE = 32

Now, we get to actually building the network. The first layer is the input for our image at the resolution specified in the settings (currently 128×128), and the second is a 2D convolutional layer using the number of filters and kernel size specified in the settings.

‘same’ padding is used to keep the output of the convolutional layer the same shape as its input. This is important for when the network outputs a probability distribution for each pixel in the attention mechanism.

encoder_input = Input(shape=IMG_SHAPE)
encoder = Conv2D(FILTERS, KERNEL, activation='relu', padding='same', name='encoder_conv_0')(encoder_input)

You could optionally add more convolutional layers to the network with the following code:

for i in range(DEPTH):
	encoder = Conv2D(FILTERS, KERNEL, activation='relu', padding='same', name=f'encoder_conv_{i + 1}')(encoder)

In our production model settings, though, ‘DEPTH’ is set to zero, which just uses the first convolutional layer.

Next, we write the attention mechanism. Attention shows us where the neural network “looks” to determine whether there exists a laser in the image or not. In theory, the pixel with the highest attention weight should be where the laser is.

attention_conv = Conv2D(1, KERNEL, activation='relu', padding='same', name='attention_conv')(encoder)
attention_flatten = Flatten(name='attention_flatten')(attention_conv)
attention_softmax = Activation('softmax', name='attention_softmax')(attention_flatten)
attention_reshape = Reshape((IMG_SHAPE[0], IMG_SHAPE[1], 1), name='attention_reshape')(attention_softmax)
attention_output = Lambda(lambda x : x[0] * x[1], name='attention_output')([encoder_input, attention_reshape])

There is a lot going on in that code, but basically here’s what each layer does:

  • Attention Conv: takes an image or output of another convolutional layer and transforms it in some way that is learned by the neural network. In this case it will output a 128×128 matrix.
  • Attention Flatten: flattens the 128×128 matrix into a 16,384 item vector.
  • Attention Softmax: applies softmax activation to the vector and outputs another 16,384 item long vector with values between 0 and 1 that sum to 1. The i’th item of this vector is the weight of the i’th pixel in the input.
  • Attention Reshape: reshapes the softmax vector to the input resolution.
  • Attention Output: multiplies the pixel weights by the pixels element-wise. Pixels with higher weights will be preserved more while those with lower weights will not.

Now, we just need to get all of the outputs set up.

classifier1_flatten = Flatten(name='classifier1_flatten')(attention_reshape)
classifier2_flatten = Flatten(name='classifier2_flatten')(attention_output)
classifier1 = Lambda(lambda x : K.max(x, axis=-1), name='classifier1')(classifier1_flatten)
classifier2 = Dense(1, activation='sigmoid', name='classifier2')(classifier2_flatten)

Again, here is a summary of each layer:

  • Classifier 1 Flatten: converts the attention weight matrix to a vector again (equivalent to the output of the Attention Softmax layer)
  • Classifier 2 Flatten: converts the output of the attention mechanism to a vector
  • Classifier 1: outputs the maximum of the attention weights
  • Classifier 2: uses a general feed-forward dense layer to learn how to “see” a laser

Both classifiers will be trained to predict whether there is a laser in the input image, outputting either a 1 or a 0. Classifier 1 forces the attention mechanism to produce a weight greater than 0.5 when a laser exists and all weights less than 0.5 when one does not. Classifier 2 is used for laser detection in production.

Finally, there is one last part of the network. We have it try to reconstruct the original image from just the attention weights. The idea here is that the easiest thing for the network to reconstruct should be the laser (since that’s the only thing in common between all images) which should encourage the attention mechanism to highlight that in the weights.

decoder = Conv2D(FILTERS, KERNEL, activation='relu', padding='same', name='decoder')(attention_reshape)
decoder = Conv2D(IMG_SHAPE[2], KERNEL, activation='relu', padding='same', name='decoder_output')(decoder)

The last thing needed is to compile the model. We use binary cross entropy for the two classifiers and mean squared error for the reconstruction loss. The optimizer is Adam.

model = Model(encoder_input, [classifier1, classifier2, decoder])
model.compile(
	loss=['binary_crossentropy', 'binary_crossentropy', 'mse'],
	loss_weights=[1000, 1000, 1],
	optimizer='adam',
	metrics=['accuracy']
)

Lasernet model architecture
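
As a rough sketch of how the three outputs get trained together (X and y are hypothetical placeholders for the preprocessed frames and the 0/1 laser labels; the complete training script is linked in the next section):

# X: batch of preprocessed frames, y: 0/1 "laser present" labels
# Both classifier heads fit y; the decoder reconstructs the input itself
model.fit(
	X,
	[y, y, X],
	batch_size=BATCH_SIZE,
	epochs=10,
	callbacks=[ModelCheckpoint('lasernet.h5')],
)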

Generating Training Data

I won’t go over the code for loading and reading the training data, but you can find the complete training script here. That said, there are a few interesting things we did in preprocessing.

Recall that, in production, Lasernet is fed a continuous stream of frames from the Raspberry Pi camera. So what we do is take the average of the previous 10 frames and diff that with the current frame, then send the diff to Lasernet instead of the raw frame.

This produces images where anything that is not really moving tends to be blacked out.
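Here is a minimal sketch of that step, assuming frames arrive as NumPy arrays from the camera:

from collections import deque

import numpy as np

history = deque(maxlen=10)  # rolling window of the previous 10 frames

def preprocess(frame):
	# Diff the current frame against the average of the recent frames
	background = np.mean(history, axis=0) if history else frame
	diff = np.abs(frame.astype(np.float32) - background)
	history.append(frame)
	return diff / 255.0  # normalized diff that gets fed to Lasernet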

Train it

It’s training time! I trained Lasernet on my Nvidia GeForce GTX 1060 6GB for a day or so and here is the result.

The white dot is the network’s prediction for each frame, and the red circle is the moving average of predictions.

Catcarlib

Now that the neural network is done, we needed a library for actually driving the car using the GPIO pins on the Raspberry Pi. For this purpose, we created catcarlib.

First, we use the Raspberry Pi GPIO Python library to help control the pins on the board. Let’s import it.

import RPi.GPIO as GPIO 

Next, we need to initialize GPIO with the correct pins that are connected to the car. In our case those were:

  • Left Motor Power: 11
  • Left Motor Forward: 13
  • Left Motor Backward: 15
  • Right Motor Power: 23
  • Right Motor Forward: 21
  • Right Motor Backward: 19

channels = [
	{'label': 'LEFT_MOTOR_POWER', 'pin': 11},
	{'label': 'LEFT_MOTOR_FORWARD', 'pin': 13},
	{'label': 'LEFT_MOTOR_BACKWARD', 'pin': 15},
	{'label': 'RIGHT_MOTOR_POWER', 'pin': 23},
	{'label': 'RIGHT_MOTOR_FORWARD', 'pin': 21},
	{'label': 'RIGHT_MOTOR_BACKWARD', 'pin': 19},
]

GPIO.setmode(GPIO.BOARD)
GPIO.setup([i['pin'] for i in channels], GPIO.OUT, initial=GPIO.LOW)
state = [False for i in channels]

To review, we set the GPIO mode to use the pin numbers on the board. Then we set up each pin in channels and initialize it to LOW. The state list is used to keep track of the current state of each pin.

Now it would be good to write some helper functions for things like resetting all the pins, getting the index of each action in the state, and enabling pins.

def reset(state):
	for i in range(len(state)):
		state[i] = False

def getIndexFromLabel(label):
	for i, channel in enumerate(channels):
		if channel['label'] == label:
			return i

	return None

def commit(state):
	GPIO.output([i['pin'] for i in channels], state)

def enableByLabel(state, labels):
	for label in labels:
		state[getIndexFromLabel(label)] = True

  • reset: resets the state to all zeros
  • getIndexFromLabel: gets the index of a particular action in the state list
  • commit: sends the current state to the pins
  • enableByLabel: enables a list of actions

At last, we can write the functions to actually move the car. Below is the function for moving the car forward. First it resets the state to a blank slate of all zeros. Then it enables the power for both motors and the forward pins. Finally, it commits the changes to the GPIO pins.

def forward():
	reset(state)
	enableByLabel(state, [
		'LEFT_MOTOR_POWER',
		'LEFT_MOTOR_FORWARD',
		'RIGHT_MOTOR_POWER',
		'RIGHT_MOTOR_FORWARD',
	])
	commit(state)

The functions for left, right, and backward can be written in much the same way. For left we want the left wheel to go forward and the right wheel to go backward. Going right is the opposite of left, and going backward is the opposite of going forward.
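For example, here is one way to write left() following that description:

def left():
	reset(state)
	enableByLabel(state, [
		'LEFT_MOTOR_POWER',
		'LEFT_MOTOR_FORWARD',
		'RIGHT_MOTOR_POWER',
		'RIGHT_MOTOR_BACKWARD',
	])
	commit(state)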

Again you can see the full catcarlib.py on GitHub.

Putting everything together

So, now we have all the hardware ready, Lasernet trained, and catcarlib to control the car. Let’s see how it does.

To be honest with you, I was a little disappointed with the performance at this point. I was just hoping for more.

I tried a number of different things to improve the performance: sanity checks to reduce false positives, like checking the color of the pixel where the network predicted the laser to be, or the Euclidean distance between the prediction and the image’s brightest pixel.

Ultimately nothing worked well enough to bring the performance to where I wanted it to be.

Catseelib

Since the neural network approach didn’t seem to be working out, I decided to try to come up with a hand-crafted algorithm that could do the job. I’ll call it catseelib.

The approach of taking the diff between the current frame and the average of the last 10 was pretty useful, so let’s start there. Then we can just take the pixel with the highest magnitude in the diff. That should be the laser if there is minimal background noise.
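As a quick sketch of that idea, assuming the diff is a NumPy array like the frames fed to Lasernet:

import numpy as np

def find_laser(diff):
	# Collapse the RGB channels, then take the brightest pixel in the diff
	magnitude = diff.sum(axis=-1)
	y, x = np.unravel_index(np.argmax(magnitude), magnitude.shape)
	return x, y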

To make sure that the only thing in the diff was the laser, I pointed the camera straight down under the head of the cat. Let’s see how well that works.

Good enough, let’s give Hannah a try.


Cloudron

Cloudron is a fascinating piece of software I found out about a few days ago. It makes it super easy to self-host a bunch of applications like Nextcloud, GitLab, Wallabag, etc.

They all have one-click installers and SSO with your Cloudron user accounts. Also, it supports encrypted backup to various cloud providers like Amazon S3, DigitalOcean Spaces, Google Cloud, etc.

The only two downsides I’ve found are that it costs $30/mo and isn’t FOSS.

I think I’m going to give it a try on a Linode server.


Pamac > Pacman

I must say that I much prefer the Pamac CLI package manager to Pacman. It’s much more intuitive and I’m glad that Manjaro includes it by default.


Arduino Pong

This is a simple pong game I created for fun for the CICS 290M Makerboard running Arduino software. The board is part of the CS Make course at UMass Amherst.

Below is the source code; it’s also available on GitHub.

// Imports
#include <Adafruit_SSD1306.h>

// Globals
#define UP_BUTTON 34
#define DOWN_BUTTON 0
#define RESET_BUTTON 35
#define BUZZER 17

// Game State
int score = 0;

// Paddle
int paddle_pos = 32;
int paddle_velocity = 1;
int paddle_height = 16;
int paddle_width = 4;

// Ball
volatile int ball_x = random(10, 80);
volatile int ball_y = random(16, 48);
volatile int ball_radius = 3;
volatile int ball_velocity_x = random(1, 3);
volatile int ball_velocity_y = random(1, 3);


Adafruit_SSD1306 lcd(128, 64); // create display object

// Callbacks
void IRAM_ATTR moveUp() {
  if (paddle_pos >= 0) {
    paddle_pos -= 1;
  }
}

void IRAM_ATTR moveDown() {
  if (paddle_pos + paddle_height <= 64) {
    paddle_pos += 1;
  }
}

void IRAM_ATTR restart() {
  if (gameOver()) {
    score = 0;
    ball_x = random(10, 80);
    ball_y = random(16, 48);
    ball_velocity_x = random(1, 3);
    ball_velocity_y = random(1, 3);
  }
}

void setup() {
  Serial.begin(9600);
  pinMode(BUZZER, OUTPUT);
  pinMode(UP_BUTTON, INPUT);
  pinMode(DOWN_BUTTON, INPUT);
  pinMode(RESET_BUTTON, INPUT);
  lcd.begin(SSD1306_SWITCHCAPVCC, 0x3C); // init
  lcd.setTextColor(WHITE);
  lcd.clearDisplay(); // clear software buffer
  lcd.display();
  attachInterrupt(RESET_BUTTON, restart, FALLING);
}

// Test if coordinates are out of bounds
boolean yIsOutOfBounds(int y) {return y > 63 || y < 0;}
boolean xIsOutOfBounds(int x) {return x > 127 || x < 0;}
boolean outOfBounds(int x, int y) {return xIsOutOfBounds(x) || yIsOutOfBounds(y);}
boolean gameOver() {return ball_x + ball_radius >= 127;}
boolean ballPaddleCollision(int x, int y) {
  return (x > 128 - paddle_width) && (y < paddle_pos + paddle_height && y > paddle_pos);
}

void drawPaddle(int x, int y, int width, int height) {
  lcd.fillRect(x, y, width, height, WHITE);
}

void ding() {
  ledcSetup(0, 5000, 8);
  ledcAttachPin(BUZZER, 0);
  ledcWriteTone(0, 500);
  delay(50); // 500Hz for 0.05 second
  ledcWriteTone(0, 0); // buzzer off
}

void drawBall() {
  lcd.fillCircle(ball_x, ball_y, ball_radius, BLACK);
  if (xIsOutOfBounds(ball_x + ball_radius) || xIsOutOfBounds(ball_x - ball_radius)) {
    ball_velocity_x *= -1;
    ding();
  }
  if (yIsOutOfBounds(ball_y + ball_radius) || yIsOutOfBounds(ball_y - ball_radius)) {ball_velocity_y *= -1;}
  if (ballPaddleCollision(ball_x + ball_radius, ball_y) || ballPaddleCollision(ball_x - ball_radius, ball_y)) {
    ball_velocity_x *= -1;
    ding();
    score++;
  }
  ball_x += ball_velocity_x;
  ball_y += ball_velocity_y;
  lcd.fillCircle(ball_x, ball_y, ball_radius, WHITE);
}

void loop() {
  // Check if Game is over
  if (gameOver()) {
    lcd.clearDisplay();
    lcd.setCursor(0,0);
    lcd.print("Score: " + String(score) + "\nPress bottom button \nto restart.");
    lcd.display();
    return;
  }
  // Check for button presses
  if (!digitalRead(UP_BUTTON)) {
    moveUp();
  }
  if (!digitalRead(DOWN_BUTTON)) {
    moveDown();
  }
  lcd.clearDisplay();
  drawPaddle(0, ball_y - (paddle_height / 2), paddle_width, paddle_height);
  drawPaddle(127 - paddle_width, paddle_pos + paddle_velocity, paddle_width, paddle_height); 
  drawBall();
  lcd.display();
}

Signal Foundation Donation

I’ve decided to set up a monthly donation to the Signal Foundation. I’m not the biggest fan of Signal and I’ve been critical of the fact that it relies on central servers that can be shut off at any time.

But none of the peer-to-peer apps I’ve tried (such as Tox) seem competitive in this space. I need something I can give a non-technical friend and not have them feel like it’s a downgrade from WhatsApp.

For now I’m going to stick with Signal but I’ll keep an eye on some other promising projects like Matrix.


Switching to Krita from GIMP

I’ve been using GIMP for the last couple of years, but since I’m on KDE Plasma I decided to give Krita a try.

It’s actually very capable and I’m going to try using it in lieu of GIMP for a little while.