
Why I quit using Google

So I was recently asked why I prefer to use free and open source software over more conventional and popular proprietary software and services.

A few years ago I was an avid Google user. I was deeply embedded in the Google ecosystem and used their products everywhere. I used Gmail for email, Google Calendar and Contacts for PIM, YouTube for entertainment, Google Newsstand for news, Android for mobile, and Chrome as my web browser.

I would upload all of my family photos to Google Photos and all of my personal documents to Google Drive (which were all in Google Docs format). I used Google Domains to register my domain names for websites where I would keep track of my users using Google Analytics and monetize them using Google AdSense.

I used Google Hangouts (one of Google’s previous messaging plays) to communicate with friends and family and Google Wallet (with debit card) to buy things online and in-store.

My home was covered with Google Homes (1 in my office, 1 in my bedroom, 1 in the main living area) which I would use to play music through my Google Play Music subscription and podcasts from Google Podcasts.

I have easily invested thousands of dollars into my Google account to buy movies, TV shows, apps, and Google hardware devices. This was truly the Google life.

Then one day, I received an email from Google that changed everything.

“Your account has been suspended”

Just the thing you want to wake up to in the morning. An email from Google saying that your account has been suspended due to a perceived Terms of Use violation. No prior warning. No appeals process. No number to call. Trying to sign in to your Google account yields an error and all of your connected devices are signed out. All of your Google data, your photos, emails, contacts, calendars, purchased movies and TV shows. All gone.

I nearly had a heart attack, until I saw that the Google account that had been suspended was in fact not my main personal Google account, but a throwaway Gmail account that I created years prior for a project. I hadn’t touched the other account since creation and forgot it existed. Apparently my personal Gmail was listed as the recovery address for the throwaway account and that’s why I received the termination email.

Although I was able to breathe a sigh of relief this time, the email was a wake-up call. I was forced to critically reevaluate my dependence on a single company for all the tech products and services in my life.

I found myself to be a frog in a heating pot of water and I made the decision that I was going to jump out.

Leaving Google

Today there are plenty of lists on the internet providing alternatives to Google services, such as this and this, although the “DeGoogle” movement was still in its infancy when I was making the move.

The first Google service I decided to drop was Gmail, the heart of my online identity. I migrated to Fastmail with my own domain in case I needed to move again (hint: glad I did, now I self host my email). Fastmail also provided calendar and contacts solutions so that took care of leaving Google Calendar and Contacts.

Here are some other alternatives that I moved to:

Migrating away from Google was not a fast or easy process. It took years to get where I am now and there are still a couple of Google services that I depend on: YouTube and Google Home.

Eventually, my Google Homes will grow old and become unsupported, at which point hopefully the Mycroft devices will have matured and become available for purchase. YouTube may never be replaced (although I do hope for projects like PeerTube to succeed), but I find the compromise of using only one or two Google services acceptable.

At this point, losing my Google account due to a mistake in their machine learning would be largely inconsequential, and my focus has shifted to leaving Amazon, which I use for most of my shopping and cloud services.

The reason I moved to mostly FOSS applications is that it seems to be the only software ecosystem where everything works seamlessly together without my having to cede control to any single company. Alternatively, I could have split my service usage evenly across Google, Microsoft, Amazon, and Apple, but I don’t feel they would have worked as nicely together.

Overall I’m very happy with the open source ecosystem. I use Ubuntu with KDE on all of my computers and Android (no GApps) on my mobile phone. I’ve ordered the PinePhone “Brave Heart” and hope to one day be able to use it or one of its successors as a daily driver with Ubuntu Touch or Plasma Mobile.

I don’t want to give the impression that I exclusively use open source software either; I do use a number of proprietary apps, including Sublime Text, Typora, and Cloudron.


How to Easily Migrate Emails Between Accounts

If you’ve decided to move to another email provider it’s possible to take all of your old emails and folders with you. The easiest way I’ve found to do this is using the mail client Mozilla Thunderbird.

Thunderbird new account dialog. File > New > Existing mail account.

With Thunderbird installed, sign in to both your old and new email accounts. This is provider dependent, but in general if you are using a popular email service like Gmail, Yahoo, or Outlook, Thunderbird can auto-discover the IMAP and SMTP endpoints. If you have two-factor authentication set up on your email account, you may need to create an app password.

If you are unsure here are the instructions for a few popular services:

When you set up your old account, make sure you set Thunderbird to download the entire email history, not just the last few months.

Account settings where you can set how many emails Thunderbird will download. Edit > Account Settings.

Once you are signed in to both accounts you should see all of your emails and folders in the old account. You may want to wait for Thunderbird to finish downloading emails if necessary.

To move emails, simply select the inbox of your old mail account, use Ctrl + A to select all the emails, then drag them to the new inbox. You will also need to drag each of the folders from the old email account to the new one.

If you’d like to just move a couple of emails you can select them individually and drag them to the new email account.


Building a Robot Cat

I recently took the CICS Make course available at the University of Massachusetts Amherst with my colleague Hannah Dacayanan. At the end of the course we were required to build a final project that involved interacting with the physical world using computers.

We decided to build a robot cat. More specifically, a small motorized car that would follow a laser emitted from a laser pointer around a flat surface. This was inspired by popular viral online videos of real cats trying to pounce on red dots from laser pointers.

Hardware

The first thing that needed to be done was to get all the hardware and put it together. The four main components were a car kit provided to us by the class, a Raspberry Pi 4, the Raspberry Pi camera, and the L298N motor driver.

The car kit with motors and L298N motor driver attached. Raspberry Pi 4 and camera sitting on top.

Hannah assembled the car kit over our Thanksgiving break and I attached the L298N driver and Raspberry Pi via the Pi’s onboard GPIO pins.

The L298N supports two motors simultaneously by connecting each motor’s positive and negative leads to the outputs shown on the diagram. The Raspberry Pi enables the motors by supplying power to the pins labeled “A Enable” and “B Enable”, and it controls the direction and speed of each motor by sending signals to the four pins between the enable pins. The top two control motor A and the bottom two motor B.

The direction of the motor is controlled by which pins are active and the speed of the motor is controlled by PWM on the active pin.

The Raspberry connected to the L298N motor driver via GPIO.

We used six GPIO pins from the Raspberry Pi to control the motors, the first three (11, 13, 15) for the left, and the last three (19, 21, 23) for the right.

At this point the hardware is completely done.

Software

For the cat to follow the laser, we needed some software on the Raspberry Pi to take a frame from the camera and tell us where in that frame the laser is, if at all.

There are two possible approaches to take: deep learning or hand-crafted algorithm. We opted to try the deep learning approach first.

Lasernet

To build a neural network that can both recognize and localize lasers in an image, we decided to use TensorFlow. We didn’t want to label tons of training data, and generating synthetic data yielded poor results, so instead we went with a semi-supervised network. Lasernet takes a frame of video as input and outputs the likelihood of a laser existing in the image. The network contains an attention mechanism, which is where we will get our localization properties from.

First, let’s import everything:

from tensorflow.keras.models import Model
from tensorflow.keras.layers import (
	Input,
	Conv2D,
	Activation,
	Reshape,
	Flatten,
	Lambda,
	Dense,
)
from tensorflow.keras.callbacks import ModelCheckpoint
import tensorflow.keras.backend as K

Then we can define some global settings that will be used throughout the network:

# Settings
IMG_SHAPE = (128, 128, 3)
FILTERS = 16
DEPTH = 0
KERNEL = 8
BATCH_SIZE = 32

Now we get to actually building the network. The first layer is the input for our image at the resolution specified in the settings (currently 128×128); the second is a 2D convolutional layer using the number of filters and kernel size specified in the settings.

‘same’ padding is used to keep the output of the convolutional layer the same shape as its input. This is important because the attention mechanism outputs a probability distribution over each pixel of the input.

encoder_input = Input(shape=IMG_SHAPE)
encoder = Conv2D(FILTERS, KERNEL, activation='relu', padding='same', name='encoder_conv_0')(encoder_input)

You can optionally add more convolutional layers to the network with the following code:

for i in range(DEPTH):
	encoder = Conv2D(FILTERS, KERNEL, activation='relu', padding='same', name=f'encoder_conv_{i + 1}')(encoder)

In our production model settings, though, DEPTH is set to zero, so only the first convolutional layer is used.

Next, we write the attention mechanism. Attention shows us where the neural network “looks” to determine whether there exists a laser in the image or not. In theory, the pixel with the highest attention weight should be where the laser is.

attention_conv = Conv2D(1, KERNEL, activation='relu', padding='same', name='attention_conv')(encoder)
attention_flatten = Flatten(name='attention_flatten')(attention_conv)
attention_softmax = Activation('softmax', name='attention_softmax')(attention_flatten)
attention_reshape = Reshape((IMG_SHAPE[0], IMG_SHAPE[1], 1), name='attention_reshape')(attention_softmax)
attention_output = Lambda(lambda x : x[0] * x[1], name='attention_output')([encoder_input, attention_reshape])

There is a lot going on in that code, but basically here’s what each layer does:

  • Attention Conv: takes an image or output of another convolutional layer and transforms it in some way that is learned by the neural network. In this case it will output a 128×128 matrix.
  • Attention Flatten: flattens the 128×128 matrix into a 16,384 item vector.
  • Attention Softmax: applies softmax activation to the vector and outputs another 16,384 item long vector with values between 0 and 1 that sum to 1. The i’th item of this vector is the weight of the i’th pixel in the input.
  • Attention Reshape: reshapes the softmax vector back to the input resolution.
  • Attention Output: multiplies the pixel weights by the pixels element-wise. Pixels with higher weights are preserved while those with lower weights are suppressed.

Now we just need to get all of the outputs set up.

classifier1_flatten = Flatten(name='classifier1_flatten')(attention_reshape)
classifier2_flatten = Flatten(name='classifier2_flatten')(attention_output)
classifier1 = Lambda(lambda x : K.max(x, axis=-1), name='classifier1')(classifier1_flatten)
classifier2 = Dense(1, activation='sigmoid', name='classifier2')(classifier2_flatten)

Again here is a summary of each layer:

  • Classifier 1 Flatten: converts the attention weight matrix back to a vector (this is equivalent to the output of the Attention Softmax layer)
  • Classifier 2 Flatten: converts the output of the attention mechanism to a vector
  • Classifier 1: Outputs the maximum probability of the attentions weights
  • Classifier 2: Uses a general feed-forward dense layer to learn how to “see” a laser.

Both classifiers will be trained to predict whether there is a laser in the input image, outputting either a 1 or 0. Classifier 1 forces the attention mechanism to produce a weight greater than 0.5 when a laser exists and to produce all weights less than 0.5 when one does not. Classifier 2 is used for laser detection in production.

Finally, there is one last part of the network: we have it try to reconstruct the original image from just the attention weights. The idea is that the easiest thing for the network to reconstruct should be the laser (since that’s the only thing all the images have in common), which should encourage the attention mechanism to highlight it in the weights.

decoder = Conv2D(FILTERS, KERNEL, activation='relu', padding='same', name='decoder')(attention_reshape)
decoder = Conv2D(IMG_SHAPE[2], KERNEL, activation='relu', padding='same', name='decoder_output')(decoder)

The last thing needed is to compile the model. We use binary cross entropy for the two classifiers and mean squared error for the reconstruction loss. The optimizer is Adam.

model = Model(encoder_input, [classifier1, classifier2, decoder])
model.compile(
	loss=['binary_crossentropy', 'binary_crossentropy', 'mse'],
	loss_weights=[1000, 1000, 1],
	optimizer='adam',
	metrics=['accuracy']
)
Lasernet model architecture

Generating Training Data

I won’t go over the code for loading and reading the training data, but you can find the complete training script here. That said, there are a few interesting things we did in preprocessing.

Recall that in production, Lasernet is fed a continuous stream of frames from the Raspberry Pi camera. So what we do is take the average of the previous 10 frames and diff that with the current frame, then send the diff to Lasernet instead of the raw frame.

This produces images where anything that is not really moving tends to be blacked out.
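A minimal sketch of that preprocessing step using NumPy (the rolling buffer and function names here are illustrative assumptions, not the exact production code):

```python
from collections import deque

import numpy as np

def make_differ(history=10):
    """Create a preprocessor that diffs each frame against the
    average of the previous `history` frames."""
    frames = deque(maxlen=history)

    def diff(frame):
        frame = frame.astype(np.float32)
        # Average of the previous frames (or the frame itself at startup)
        background = np.mean(frames, axis=0) if frames else frame
        frames.append(frame)
        # Static pixels cancel out; moving ones (like a laser dot) remain
        return np.abs(frame - background)

    return diff
```

Anything static across the window cancels to near zero, which is why the backgrounds come out blacked out.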

Train it

It’s training time! I trained Lasernet on my Nvidia GeForce GTX 1060 6GB for a day or so and here is the result.

The white dot is the network’s prediction for each frame, and the red circle is the moving average of predictions.

Catcarlib

Now that the neural network is done, we needed a library for actually driving the car using the GPIO pins on the Raspberry Pi. For this purpose, we created catcarlib.

First, we use the Raspberry Pi GPIO Python library to help us control the pins on the board. Let’s import it.

import RPi.GPIO as GPIO 

Next, we need to initialize GPIO with the correct pins that are connected to the car. In our case those were:

  • Left Motor Power: 11
  • Left Motor Forward: 13
  • Left Motor Backward: 15
  • Right Motor Power: 23
  • Right Motor Forward: 21
  • Right Motor Backward: 19

channels = [
	{'label': 'LEFT_MOTOR_POWER', 'pin': 11},
	{'label': 'LEFT_MOTOR_FORWARD', 'pin': 13},
	{'label': 'LEFT_MOTOR_BACKWARD', 'pin': 15},
	{'label': 'RIGHT_MOTOR_POWER', 'pin': 23},
	{'label': 'RIGHT_MOTOR_FORWARD', 'pin': 21},
	{'label': 'RIGHT_MOTOR_BACKWARD', 'pin': 19},
]

GPIO.setmode(GPIO.BOARD)
GPIO.setup([i['pin'] for i in channels], GPIO.OUT, initial=GPIO.LOW)
state = [False for i in channels]

To review, we set the GPIO mode to use the pin numbers on the board. Then we set up each pin in channels and initialize it to LOW. The state list keeps track of the current state of each pin.

Now it would be good to write some helper functions for things like resetting all the pins, getting the index for each action in the state, and enabling pins.

def reset(state):
	for i in range(len(state)):
		state[i] = False

def getIndexFromLabel(label):
	for i, channel in enumerate(channels):
		if channel['label'] == label:
			return i

	return None

def commit(state):
	GPIO.output([i['pin'] for i in channels], state)

def enableByLabel(state, labels):
	for label in labels:
		state[getIndexFromLabel(label)] = True

  • reset: resets the state to all zeros
  • getIndexFromLabel: gets the index of a particular action in the state list
  • commit: sends the current state to the pins
  • enableByLabel: enables a list of actions

At last, we can write the functions to actually move the car. Below is the function for moving the car forward. First it resets the state to a blank slate of all zeros. Then it enables the power for both motors and the forward pins. Finally, it commits the changes to the GPIO pins.

def forward():
	reset(state)
	enableByLabel(state, [
		'LEFT_MOTOR_POWER',
		'LEFT_MOTOR_FORWARD',
		'RIGHT_MOTOR_POWER',
		'RIGHT_MOTOR_FORWARD',
	])
	commit(state)

The functions for left, right, and backward can be written in much the same way. For left we want the left wheel to go forward and the right wheel to go backward; going right is the opposite of left, and going backward is the opposite of going forward.
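As a hardware-free sketch of that pattern (the label sets below follow the description above and are illustrative, not the exact file on GitHub), the four movements differ only in which labels they enable:

```python
# Label sets for each movement. Power is always enabled for both motors;
# only the direction labels change. Per the description above, "left"
# drives the left wheel forward and the right wheel backward.
MOVES = {
    'forward': ['LEFT_MOTOR_POWER', 'LEFT_MOTOR_FORWARD',
                'RIGHT_MOTOR_POWER', 'RIGHT_MOTOR_FORWARD'],
    'backward': ['LEFT_MOTOR_POWER', 'LEFT_MOTOR_BACKWARD',
                 'RIGHT_MOTOR_POWER', 'RIGHT_MOTOR_BACKWARD'],
    'left': ['LEFT_MOTOR_POWER', 'LEFT_MOTOR_FORWARD',
             'RIGHT_MOTOR_POWER', 'RIGHT_MOTOR_BACKWARD'],
    'right': ['LEFT_MOTOR_POWER', 'LEFT_MOTOR_BACKWARD',
              'RIGHT_MOTOR_POWER', 'RIGHT_MOTOR_FORWARD'],
}
```

Each real movement function resets the state, enables its label set, and commits, exactly as forward does.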

Again you can see the full catcarlib.py on GitHub.

Putting everything together

So, now we have all the hardware ready, Lasernet trained, and catcarlib to control the car. Let’s see how it does.

To be honest with you, I was a little disappointed with the performance at this point. I was just hoping for more.

I tried a number of different things to improve the performance: sanity checks to reduce false positives, like checking the color of the pixel where the network predicted the laser to be, or the Euclidean distance between the prediction and the image’s brightest pixel.

Ultimately nothing worked well enough to bring the performance to where I wanted it to be.

Catseelib

Since the neural network approach didn’t seem to be working out, I decided to try to come up with a handcrafted algorithm instead. I’ll call it catseelib.

The approach of taking the diff between the current frame and the average of the last 10 was pretty useful, so let’s start there. Then we can just take the pixel with the highest magnitude in the diff; that should be the laser, assuming minimal background noise.
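The core of that idea fits in a few lines of NumPy (a sketch; the function name and input shapes are assumptions for illustration):

```python
import numpy as np

def find_laser(diff):
    """Return the (row, col) of the brightest pixel in a diff frame."""
    # Collapse color channels into a single intensity value per pixel
    intensity = diff.sum(axis=-1) if diff.ndim == 3 else diff
    # argmax gives a flat index; unravel it back into 2D coordinates
    return np.unravel_index(np.argmax(intensity), intensity.shape)
```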

To make sure that the only thing in the diff was the laser, I pointed the camera straight down under the head of the cat. Let’s see how well that works.

Good enough, let’s give Hannah a try.


How to Fix OnePlus 3 Camera Focus

A few weeks ago my OnePlus 3 camera stopped focusing properly, meaning I could no longer take photos of text or scan QR codes. After doing a bit of research on the OnePlus community forums, I found that this was likely an issue with the camera’s hardware stabilization getting stuck.

If your stabilizer is stuck you may be able to fix it by just giving your phone a good shake. However, the solution for me was to take a refrigerator magnet and move it over the external camera module several times (you should hear the camera moving back and forth inside).

If the magnet solution does not work, some people have had success opening their phone and placing a metal object like a paperclip or staple next to the camera module. Below is a video I found on YouTube of someone doing so. That said, I believe opening your phone will void your warranty, so if you are near a OnePlus service center I would recommend giving them a visit to see if they can fix the problem first.


How to Break Audible DRM

After purchasing audiobooks on Audible you may want to store the files on your computer in case Amazon decides to pull the books later on. Audible allows you to download encrypted copies of your books from your account library.

Clicking the “Download” link for any audiobook will download a .aax file to your computer. This file contains audio data encrypted using a 4-byte key unique to your Audible account. Because the key is so short, it is trivial to break by brute force, and there is plenty of software available specifically for that purpose. In this blog post, I’ll cover two ways to decrypt the file.

OpenAudible

OpenAudible is a free, open-source graphical program available for Linux, Windows, and macOS. It’s specifically designed to remove DRM from your Audible files and hides a lot of the complexity.

Once you install OpenAudible from its website you can drag and drop the .aax files you downloaded from Audible into it. They will show up in a list at the bottom of the window.

The book 1984 loaded into OpenAudible

With your audiobooks loaded select them (Ctrl + A) and right-click to select “Convert to MP3”.

Right-clicking and selecting Convert to MP3 on 1984

OpenAudible will convert each of your audiobooks to a DRM-free mp3 file and save them in the ~/OpenAudible folder on your computer. If you can’t find the mp3 files then right-click one of the books and select “Show MP3”.

One nice thing about OpenAudible over the FFMPEG method is that the book’s metadata (author, reader, publisher, etc.) will be preserved in the resulting mp3 file.

FFMPEG

ffmpeg is a popular free and open-source command-line utility for processing video and audio. It can decrypt the Audible DRM but requires the specific 4-byte encryption key unique to your Audible account. You can brute force the key from one of your downloaded .aax files (you only need to recover it once and it will work for the others) using this RainbowCrack plugin.

  1. First, download the plugin and rainbow tables from GitHub.

     $ git clone https://github.com/inAudible-NG/tables.git

  2. Next, use ffprobe to get the checksum of one of your audiobook files.

     $ ffprobe audiobook.aax
     ...
     [mov,mp4,m4a,3gp,3g2,mj2 @ 0x1dde580] [aax] file checksum == 999a6ab8...
     [mov,mp4,m4a,3gp,3g2,mj2 @ 0x1dde580] [aax] activation_bytes option is missing!

  3. Finally, crack the encryption key using the RainbowCrack plugin you downloaded.

     $ ./tables/rcrack . -h 999a6ab8...

Once you’ve gotten your key from RainbowCrack you can use it to convert your .aax files to mp3s using ffmpeg like so (replace XXXX with your key):

ffmpeg -activation_bytes XXXX -i audiobook.aax audiobook.mp3

Unfortunately, this does not appear to migrate the metadata to the new mp3 files created like the OpenAudible approach does.


Epidemiology with Python

I was recently tasked with solving a fun epidemiology puzzle for one of my university classes. Below is an excerpt from the assignment describing the scenario.

11 people get sick enough to go to a local hospital with severe diarrhea and vomiting that lasts four days or so in each patient. All the patients turn out to all have the same strain of norovirus.

It turned out that they all knew each other and over the summer had been sharing produce from their gardens. The nurse’s hypothesis was that one person had norovirus, and had transmitted the virus to others on the food. She made a list, numbered the patients, starting with the patient that had first shared, and who they had shared with. It turned out a total of 16 people had shared produce, so she contacted the additional people who had not gotten sick, and asked them who they had shared produce with and when. In the end, she came up with the list below. So, patient 1 first shared vegetables with patient 12, then with patient 14. Patient 2 first shared vegetables with patient 5, then with patient 15, and so on. And patient 1 never got ill, while patient 2 did. Any time that two people come in contact with each other, the virus can move either way. For example, it would be possible for patient 2 to have infected patient 5, or patient 5 to infect patient 2.

After studying the list, she said, “I know who started this!” She asked that patient where they had been recently and it turned out they’d been on a cruise ship that had had a severe outbreak of norovirus! Based on her data, which patient was the one who went on the cruise and started the epidemic?

Data

Below is the dataset of patients and who they met with.

Patient | Meetings | Sick
--------|----------|------
1       | 12,14    | FALSE
2       | 5,15     | TRUE
3       | 6,16     | TRUE
4       | 1,7,11   | TRUE
5       | 10,3,16  | FALSE
6       | 13,2     | FALSE
7       | 2,8      | TRUE
8       | 3,10     | TRUE
9       | 15,5     | TRUE
10      | 9        | TRUE
11      | 14       | TRUE
12      | 13,15    | FALSE
13      | 16,3     | TRUE
14      | 9        | TRUE
15      | 16,5     | FALSE
16      | 9        | TRUE

Rules

So, based on the passage above, we can derive some simplified rules to use in solving the puzzle:

  1. Meetings happen sequentially going left to right
  2. Rows are in chronological order going top to bottom
  3. We don’t move onto the next row until all the meetings of the current row are complete
  4. Each meeting has only two people
  5. When two people meet the disease can go either direction

Solution

Theory

To solve this we need to find an algorithm that can identify patient zero based on their interactions with others. At first, I thought about using a graph-based approach to model each meeting, but the temporal nature of the data makes that untenable. Instead, I opted for a much simpler and intuitive approach that takes advantage of the ordering.

Since we know the precise sequence in which meetings occurred and we know that each meeting contains only two people we can generate a list of interaction tuples from the dataset. For example, 1 meets with 12, then 14, and then 2 meets with 5. So we could have a list like so:

[(1, 12), (1, 14), (2, 5)]

Once we have our sequential list of interactions, we can iterate through them to simulate the effect of any given individual being patient zero. Then it’s just a game of trial and error, trying out each possible patient. If we find a contradiction in our simulation based on the data we were given (i.e. someone gets sick in the simulation but was healthy in the table), then we know our guess for patient zero was wrong and we can move on to the next one. But if we get all the way to the end of the simulation and everyone who was supposed to get sick is sick and everyone who was supposed to be healthy is healthy, then we have found our culprit.

Code

So now that we have a game-plan, we just need to code it up and find out who got everyone sick.

We’ll be using the Pandas python library for working with our table.

import pandas as pd

I’ve placed the table into a CSV file called data.csv which we’ll open as a Pandas DataFrame.

# Load the dataset into a DataFrame
df = pd.read_csv('data.csv')

We need to get the list of interactions in chronological order from the table. To do this I’ll use a Python generator to iterate over the rows and for each row I’ll split the meetings up and yield them.

def get_interactions():
    # Iterate through the rows of the DataFrame
    for index, row in df.iterrows():
        # Get each meeting in order
        for meeting in row['Meetings'].split(','):
            # Yield the interaction
            yield {row['Patient'], int(meeting)}

Great, so far so good. All we have left is the actual simulation to write. For this, what we’ll do is keep a list of patients who are sick in the simulation. It will start with just our guess for patient zero and grow as they interact with others.

When we iterate through the interactions there are three possible situations that can happen:

  1. The interaction has no sick people in it
  2. The interaction has one sick person and one healthy person
  3. The interaction has two sick people

If the interaction has no sick people or two sick people, then we just move along to the next one. But if it has one sick person and one healthy person, then we need to make the healthy person “sick” by adding them to the sick_people set. Before doing that, however, we check the real data to see whether that person was recorded as being sick. If they were, they get added and we keep going; if they were supposed to stay healthy, then we know our hypothesis was wrong and can return False.

Finally, if we make it through all of the interactions without invalidating our hypothesis, then it must be true and we return True.

# Test if our patient zero hypothesis is correct
def test_hypothesis(id, interactions):
    # A set of sick people in the simulation
    # starts with just patient zero
    sick_people = {id}
    # Iterate over the interactions in
    # chronological order
    for interaction in interactions:
        # Check if the interaction has at least
        # one sick person in it
        if sick_people.intersection(interaction):
            # If there is a sick person then
            # check if everyone in this interaction
            # was supposed to get sick.
            for person in interaction:
                # If they were then add them to the set
                if df[df['Patient'] == person]['Sick'].bool():
                    sick_people.add(person)
                # If they weren't then we are done and
                # can return False
                else:
                    return False

    return True

Alright, well that’s it! Just add a few more lines of code to run our functions and let’s see who it was.

# Get list of interactions
interactions = list(get_interactions())
# Iterate through the 16 candidates
for candidate in range(1, 17):
    # Check if our guess is correct
    if test_hypothesis(candidate, interactions):
        # Yay! We found them.
        print('It was {}!'.format(candidate))
        break

Run it!

(env) kyle@IntelNuc:~/Code/Python/Virus Spread$ python who_did_it.py 
It was 7!

Well, it looks like it was patient #7 who got everyone sick. Mystery solved.

I decided to write another script using similar code to produce a tree diagram of the infection, which is pictured below. As you can see, the norovirus does indeed start with patient #7 and spreads to all of the other sick people from there. I encourage you to follow the path of the tree through the table to confirm the results for yourself.


Exploration of the Game of Life

Basic Exploration

In-depth Exploration

  1. Can you find a pattern that returns to its starting point after more than two time steps?
  2. What’s the longest you can see a pattern go without repeating a configuration?

To answer these questions, I decided to build my own implementation of Conway’s Game of Life in Python and brute force all possible starting positions.

"""
Kyle's Game of Life Implementation
1) live cells die if they have 0, 1, or 4+ neighbors
2) empty cells have a birth if they have exactly three neighbors
"""

import numpy as np

# Create a blank board
board = np.zeros((5, 5))

def iterate(board):
	"""
	This function takes the current board state and returns the next state.
	"""
	conv_board = np.zeros((7, 7))
	conv_board[1:6, 1:6] = board
	conv = np.lib.stride_tricks.as_strided(
		conv_board,
		(5, 5, 3, 3), # view shape: a 3x3 neighborhood for each of the 5x5 cells
		(56, 8, 56, 8) # strides in bytes (float64): 56 = one 7-element row, 8 = one element
	)
	
	# The new board
	b = np.zeros((5, 5))
	for i in range(5):
		for j in range(5):
			# Count the number of neighbor live cells
			if conv[i, j, 1, 1] == 1:
				# Subtract itself from total count
				b[i, j] = conv[i, j].sum() - 1
			else:
				b[i, j] = conv[i, j].sum()

	# Cells with 0, 1, or 4+ die
	b[np.any([b <= 1, b >= 4], axis=0)] = 0
	# Living cells with 2 neighbors get to keep living
	b[np.all([b == 2, board == 1], axis=0)] = 1
	# Dead cells with 2 neighbors stay dead
	b[np.all([b == 2, board == 0], axis=0)] = 0
	# All cells with 3 neighbors live
	b[b == 3] = 1
	# Return the new board state
	return b

if __name__ == '__main__':
	while input('Continue? [y/n] ') == 'y':
		print(board)
		board = iterate(board)
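To brute force every start, each of the 2^25 possible 5×5 boards can be decoded from a 25-bit integer. A small helper along these lines (an illustrative assumption, not the original brute-force script) does the decoding:

```python
import numpy as np

def board_from_int(n, size=5):
    """Decode an integer in [0, 2**(size * size)) into a size×size 0/1 board."""
    # Bit i of n becomes cell i in row-major order
    bits = [(n >> i) & 1 for i in range(size * size)]
    return np.array(bits, dtype=np.float64).reshape(size, size)
```

Iterating n from 0 to 2**25 - 1 and feeding each decoded board to iterate covers every possible starting position.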

Results

It took approximately two hours to play all 2^25 (33,554,432) possible starting positions. There were 300,477,379 total steps taken, with an average of 8.95 steps per game. The game with the longest period was 39 steps.

The Longest Game


Categories
Blog

Introducing the AskSteem Search Engine

This article was originally published on Steemit.com

Asksteem

Over the past month, I’ve been building a new search engine that indexes the steem blockchain. It’s currently live at asksteem.com. The goal of AskSteem is to provide a reliable, powerful, and fast search engine that is optimized for steem. In this post, I’d like to cover some of the features that are available.

Query Syntax

There are many different ways that you can query the AskSteem index. I’ve created a video demonstrating each of them, but you may also read their descriptions and examples below.

Keyword/Phrase Search

Like many other search engines, you can search for general phrases and terms. AskSteem will try its best to find the document that is most relevant to your query based on our ranking algorithm.

Example Queries (tip: click an example to go to that query on AskSteem):

  • How to buy bitcoin
  • What is steem
  • Markdown tutorial

Exact Search

Putting a query into quotes requests that AskSteem only return documents that contain exactly that phrase in that order.

Example Queries:

  • "How to buy bitcoin"
  • "What is steem"
  • "Markdown tutorial"

Tag Search

AskSteem allows you to filter posts by tag.

Example Queries:

  • tags:life
  • tags:steemit

Author Search

You can filter posts by author too.

Example Queries:

  • author:thekyle
  • author:abit
  • author:steemit

Creation Date Search

AskSteem provides a highly flexible and powerful date search tool for posts. You can search by exact date or by date range. Dates must be in the form YYYY-MM-DD.

Example Queries:

  • All posts posted on June 2, 2017: created:2017-06-02
  • All posts posted between May 1, 2017 and May 31, 2017: created:[2017-05-01 TO 2017-05-31]

Search by Number of Votes/Comments

As with dates, AskSteem has a robust set of tools for searching by the number of upvotes or comments a post has received.

Example Queries:

  • Posts with 150 votes: net_votes:150
  • Posts with between 100 and 150 votes: net_votes:[100 TO 150]
  • Posts with 50 comments: children:50
  • Posts with between 40 and 50 comments: children:[40 TO 50]
  • Posts with more than 50 comments: children:>50 (or fewer than 50: children:<50)
  • These operators also work with votes, e.g. posts with 10 or fewer votes: net_votes:<=10

Searches with Boosts

You can prioritize certain parts of your query with boosts. These are indicated by placing ^n at the end of a term, where n is the weight you want to give that part of the query.

Example Queries:

  • Give the term bitcoin a boost of two: I really want posts to have the term bitcoin^2 in them
  • Give the term mine a boost of two and the term steem a boost of three: How to mine^2 steem^3

Inclusive/Exclusive Search

You can require or exclude certain terms by placing a + or a - in front of them.

Example Query:

  • Find documents about cryptocurrency that must mention mining but must not mention bitcoin: cryptocurrency +mining -bitcoin

Wildcard Search

You can use the wildcard expressions ? to match a single character and * to match any number of characters.

Example Query:

  • How to mine any cryptocurrency: How to mine *

Boolean Search

AskSteem supports any combination of the previously mentioned search types in a single powerful query, using the boolean operators AND, OR, and NOT, along with parentheses to group statements.

Example Queries:

  • Posts tagged with asksteem by @thekyle: tags:asksteem AND author:thekyle
  • Posts with between 50 and 100 comments, more than 500 upvotes, and either tagged 'bitcoin' or containing the term bitcoin: (bitcoin OR tags:bitcoin) AND (net_votes:>500 AND children:(>50 AND <100))
  • Posts created on June 2, 2017 with 100 or more upvotes but fewer than 10 comments: created:2017-06-02 AND net_votes:>=100 AND children:<10
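Because the filter and range operators compose with plain AND, client code can build these queries mechanically. A minimal sketch (the `asksteem_query` helper is a hypothetical name of mine, not part of AskSteem):

```python
def asksteem_query(terms=None, tags=None, author=None, min_votes=None):
    """Compose an AskSteem-style boolean query string from filters."""
    parts = list(terms or [])
    parts += [f"tags:{t}" for t in (tags or [])]
    if author:
        parts.append(f"author:{author}")
    if min_votes is not None:
        parts.append(f"net_votes:>={min_votes}")
    return " AND ".join(parts)

query = asksteem_query(terms=["bitcoin"], tags=["asksteem"],
                       author="thekyle", min_votes=100)
print(query)  # bitcoin AND tags:asksteem AND author:thekyle AND net_votes:>=100
```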

Developers

https://www.youtube.com/watch?v=aiqasuhUPXU

Because AskSteem integrates directly with the steem blockchain, it can read metadata directly from posts and use that data when performing queries and displaying results. We encourage developers to add AskSteem-compatible metadata to their posts so that we can show links to your application in our search results. The full documentation can be found at asksteem.com/developers; in this post I will summarize the most important tags.

  • domain: The domain name or web address that your application is hosted on. Example: example.com
  • locator: The path to reach the post on the domain, relative to the root. Example: /CATEGORY/@AUTHOR/PERMLINK
  • protocol: Either 'http' or 'https'; if not provided, http will be used by default. Example: https

If none of the above metadata is provided, AskSteem will link to steemit.com for all posts by default. However, the platform creating the content presumably has the best interface for viewing it, so we would rather link there.

The domain and locator tags are required for custom linking to work, however, the protocol tag is optional and will default to http.

  • The domain tag should be the domain name that your web-based steem application is hosted on. It is subdomain sensitive, so if you're hosting on the www subdomain, include it.
  • The locator should be the permalink to that particular post in your application's URL structure. Note the leading forward slash; it is required.

The final URL that we point to is generated by concatenating the domain and locator, with the protocol at the beginning (http unless otherwise specified).
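The concatenation rule and steemit.com fallback can be sketched as follows. This is a minimal illustration of mine, not AskSteem's actual code; in particular, filling in the CATEGORY/AUTHOR/PERMLINK placeholders per post is my assumption, since the docs only show the template form:

```python
def asksteem_link(metadata, author, permlink, category):
    """Sketch of the link rule: protocol + domain + locator, falling
    back to steemit.com when the required tags are missing."""
    domain = metadata.get("domain")
    locator = metadata.get("locator")
    if not domain or not locator:
        return f"https://steemit.com/{category}/@{author}/{permlink}"
    # Assumption: the CATEGORY/AUTHOR/PERMLINK placeholders in the
    # locator template are filled in for each individual post.
    locator = (locator.replace("CATEGORY", category)
                      .replace("AUTHOR", author)
                      .replace("PERMLINK", permlink))
    protocol = metadata.get("protocol", "http")
    return f"{protocol}://{domain}{locator}"

print(asksteem_link(
    {"domain": "example.com", "locator": "/CATEGORY/@AUTHOR/PERMLINK",
     "protocol": "https"},
    "thekyle", "my-post", "blog",
))  # https://example.com/blog/@thekyle/my-post
```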

Additionally, if you are building an application on the steem blockchain and need a search API, please email us at contact@asksteem.com. We can query custom metadata and make various other customizations to the ranking algorithm to support your use case.

Funding

The harsh reality is that search engines are expensive to run, and adding new features and improving performance is difficult if the basic funding needs of the project are not covered. AskSteem currently costs me about $100/month to run, and that number will of course continue to increase as steem grows and the index size increases.

Ideally, my goal is to have those costs covered through upvotes from the steem community and to use any extra money for adding new features and scaling the search infrastructure to meet demand. If this works, AskSteem will be the first search engine in the world to use a cryptocurrency-based revenue model instead of selling advertising.

Thank you for your time, and happy searching!

Categories
Blog

How to Make Phone Calls on an iPod

The new iPod Touch from Apple is a great way to get an iOS 8/9 experience without paying the hefty price of an iPhone. However, while cheaper than the iPhone, it does come with a few drawbacks, one of which is that it cannot (by default) be used to make phone calls or send text messages. Fortunately, with today's technologies it's pretty easy for even the most non-technical people to start making phone calls from their new iPod.

Step #1 – Create a Google Account

If you’ve already got a Google account this can be skipped however the service will be using is owned by Google and requires a free Google account to start using it. You can register here.

Step #2 – Get Google Voice

Next, while signed in to your Google account, visit voice.google.com in your browser and run through the basic setup process.

 Google Voice Setup:

  1. Choose "I want a new number" from the initial setup, or, if you prefer, use your current number.
  2. Add any of your current phones as a forwarding phone. (Which phone you choose doesn't matter for this tutorial.)
  3. Enter your zip code or a keyword to search for available numbers; a list should appear, and you can pick a number you like.

Step #3 – Edit Basic Settings (Optional)

Once you have set up your Google Voice account and selected your number, you can change specific settings, such as your voicemail and PIN, by clicking on the gear icon in the upper right corner of the user interface and selecting Settings.

Step #4 – Install Google Voice App

Next, you just have to install the Google Voice app on your iPod and connect it to your Google account. Once you complete the app setup, you're ready to send and receive text messages and phone calls from your iPod for free. The number you give to people to call you is the same number you chose when you set up Google Voice. If you forget your number, you can find it easily by logging into Google Voice and finding the phone number listed in the left column.

Categories
Blog

Top 5 Dedicated Hosting Providers

1. Gigapros Web Hosting

While Gigapros does have shared and VPS hosting, where they really shine is their dedicated servers. They operate very differently from many other web hosts, which puts them at number one on this list. The main difference is that they let you pick exactly the specs you need for your server, including RAM, CPU, and any operating system from CentOS to Windows. On top of that, their pricing is extremely competitive: a server with 64 GB of RAM and a 3.8 GHz processor comes in at just 119 USD a month.

2. BlueHost

I like to think of Bluehost as the Apple of the web hosting world. While they shine in several areas, the two I find really unique are support and simplicity. On support: in all the times I've had an issue with any of my Bluehost servers, I have never waited more than 5 minutes before getting into a real-time chat with someone who could actually help me, instead of some low-level salesperson. The simplicity is not as big a deal for pro web designers and entrepreneurs as it is for newbies, but it's always nice to have everything you need available in one place (domains, hosting, etc.). As a quick side note, I've also never had to wait more than 5 minutes for a domain purchased through Bluehost to propagate to their servers.

3. DreamHost

I’ve always had mixed feelings about Dreamhost based on my first experiences with them, however as of the time I’m writing this review I’m glad to say that they’ve really become one of the more competitive web-hosts. While they really don’t offer any of the simplicity of Bluehost or the low prices of Gigapros if you’re looking for a good mix between Price, Support, and Reliability then Dreamhost is defiantly a good candidate to look into.

4. HostGator

HostGator is another company similar to Bluehost in that they put customer satisfaction and support before anything else. While neither their VPS nor their dedicated servers give you as much bang for your buck as Gigapros, they still offer very powerful dedicated servers. You shouldn't expect to start your own social network on HostGator, but you can certainly run some really large WordPress or other basic CMS sites. Like Gigapros, HostGator also offers both Linux and Windows server hosting.

5. GoDaddy

I’d be amiss not to at least talk about GoDaddy in this list considering they are probably the most popular web hosting company. Although to be Frank nearly every experience I’ve had with actually hosting with GoDaddy has been just terrible, and I can’t recommend you buy anything but Domain Names from them.