Albion Online Farming Script: Is an Image-Based Automation Approach Worth Trying?

I’m a big fan of automating boring stuff on my computer. I’ve written programs in Python and JS to place orders on online shopping websites, transfer data between multiple Excel files, extract information from PDFs, and so on. In 2020, it’s almost guaranteed that you can find a Python library offering full or partial access to the APIs of the specific program you’d like to automate – Selenium supports multiple languages for webpage automation/testing; pywinauto is good for automating native Windows apps; openpyxl lets you read and write Excel files without even installing MS Excel (no macro support yet), and the list goes on. Despite their ease of use, these libraries are tied to the specific tasks they were designed for and therefore barely allow cross-program or general-purpose automation. Complex programs that intentionally close off their APIs, e.g. video games, also leave little room for these libraries. An image-based automation method, which gathers information directly from the user’s screen, imitates how human beings process information and is thus a promising approach to general-purpose automation.

I’ve been playing an MMORPG called Albion Online for a few months. It features a player-driven economy where most in-game items are produced, traded and consumed by players, with very limited intervention from the game itself. One way to earn game currency is farming crops: players can buy seeds from the market, sow them on their personal islands, water the seedlings, harvest them the next day and then sell the products on the market. It’s quite a tedious daily task, so I challenged myself to write a Python program that automates the whole process.

Disclaimer

Using an automation script violates the game’s terms of service (specifically term 13.3, No Manipulation). This article does not share any snippet of the automation script, nor does it promote using such a script in game. The sole purpose of this article is to share my findings from applying an image-based automation script to a complex computer program.

Please do not contact me to acquire this script, because

  1. the script is written in such a way that it only works on my own computer
  2. I have no interest in gaining benefit from selling it
  3. I enjoy the game and I don’t wish the in-game economy system to be disrupted
  4. I would strongly suspect that you work for SBI (the game dev) and are trying to obtain my game information so that you can ban my account

I’ll try my best to answer any technical questions you may have, but nothing beyond that.

Why Image-Based?

Because other approaches won’t work. Here’s what the script is expected to do in sequence:

  1. open the chest near the south entrance on my island
  2. equip my character with a bag
  3. transfer seeds from the chest to the bag
  4. move the character to the first farm near the north end of the island
  5. harvest products by clicking nine product icons in sequence and click the “harvest” button every time the confirmation window pops up
  6. select correct seed icon from bag/inventory, click “place” button and click the right position on the farmland
  7. click on each seedling (placed seed) and click “water” button every time the confirmation window pops up
  8. move to the rest four farms in sequence and repeat the harvest-sow-water (5-7) actions
  9. move back to the chest and store all products
  10. un-equip bag and store it in the chest.
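In pseudocode terms, the routine above boils down to running a sequence of steps and retrying each one until it succeeds. Here is a minimal task-runner sketch; the task names and the runner itself are illustrative placeholders, not the real script:

```python
# Minimal task-runner sketch: each step is a function that returns True on
# success; the runner retries a failed step a few times before giving up.
# Task names here are hypothetical stand-ins, not the real script's API.

def run_tasks(tasks, retries=3):
    """Run (name, task) pairs in order; retry each up to `retries` times."""
    for name, task in tasks:
        for attempt in range(retries):
            if task():
                break
        else:
            raise RuntimeError(f"task {name!r} failed after {retries} tries")

# Dummy stand-ins for the real game actions.
log = []
tasks = [
    ("open_chest", lambda: log.append("open_chest") or True),
    ("equip_bag",  lambda: log.append("equip_bag") or True),
    ("take_seeds", lambda: log.append("take_seeds") or True),
]
run_tasks(tasks)
```

The retry loop matters more than it looks: in a laggy online game, a single failed click is routine rather than exceptional.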

Using a script to trade in a player-driven market seems like a dangerous move to me, so I chose to purchase seeds manually and store them in a chest on my island. For the same reason, I also sell the products manually.

Just by inspecting the above activities, one may suggest using a simple macro repeater which records all mouse and keyboard activity in one instance and replays it in the next harvest cycle. Well, there are at least two reasons why a macro repeater can’t do the job.

Rabbits run around the island at random

For one, there are randomly generated rabbits running around the island. Most mouse clicks recorded by a macro repeater are meant to move the character to a desired position, but when replayed, those clicks can land on one of these rabbits, so the character goes off to hunt the rabbit instead of moving to the right position, immediately invalidating all the following mouse and keyboard actions. The other reason is the performance of any online game: communication between client and server isn’t always stable, which is why lag occurs from time to time. A macro repeater can’t tolerate any in-game lag, because lag offsets or even cancels mouse actions, leading to complete failure of the program.

For the script to work, it has to handle both the input and the output sides – in other words, how the script gathers information (current character state, location, item positions, etc.) from the game, and how it sends commands to the game to control the character. The output side is easy: there are multiple Python libraries that can send mouse and keyboard actions once the position information is known, and I’m using pyautogui for that purpose. The input side is harder. The game uses anti-cheat software that prevents third-party programs from hijacking the game packets or modifying game data in RAM. Even if we were willing to risk being caught by the anti-cheat software, most packets sent and received by the game, especially those containing critical position information, are encrypted. (A recent Reddit post claims the packets are actually in plain text, but when I looked at them they were encoded, to say the least. People with much better knowledge of packet sniffing might have more direct ways of obtaining game data.) I also haven’t found a way to easily read that information from RAM using Python. An image-based solution seems to be the only one I can rely on: the script “looks at the screen” and figures out where the character is, where the needed items are, where the action buttons are, and so on. Based on the gathered information, it decides the next action to execute and sends the corresponding mouse/keyboard commands.

Development Environment

The script runs on a 5-year-old laptop with an entry-level i7 CPU, an entry-level NVIDIA GPU and a 128 GB SSD. It’s not the most powerful computer you’ve ever seen; in fact it can barely reach 60 FPS in the game at high video settings. To make sure the script can run without much lag, I lowered the game resolution to the minimum, which is a big compromise considering that every action of the script relies on clear images. Nonetheless, I was surprised that the laptop can run both the game and the script at the same time.

Cross-Correlation

Before we get into the details of the script, it might help to understand the concept behind image matching. I’m using a Python library called imagesearch, which offers very simple APIs for image search. The author of the library also wanted to automate a game, so he built a Python wrapper around OpenCV (cv2) and pyautogui; the library’s tutorial and documentation cover the details.

The core method for finding the matching point is called cross-correlation; you may find its Wikipedia page helpful.

Source: https://gifer.com/en/2xBt

Cross-correlation of two one-dimensional signals generates another one-dimensional signal that represents the similarity of the two at various offsets. Likewise, cross-correlation of two images produces another image, a “similarity heat map” of the two. Of the two images being compared, one comes directly from your computer screen and the other is the sample image you are searching for. On the heat map, the location of the highest-valued pixel is where the sample image is most likely to appear on your screen, and the value of that pixel indicates how similar the sample image is to that region of the screen. In short, the image-search function takes three key inputs: a screenshot of the game, a sample image, and a precision score that sets the threshold on the similarity score. If the highest similarity score found is lower than the precision score, no result is returned. Every time I use this image-search function in the script, the precision score needs to be calibrated to get the best result.
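Here is a toy, pure-Python illustration of the idea on a deliberately tiny “screen” and template; the real script delegates this to imagesearch/OpenCV, which does the same thing far faster:

```python
# Toy normalized cross-correlation search: slide a small template over a
# "screen" (2-D list of grayscale values), score each offset, and accept
# the best match only if it clears a precision threshold -- the same three
# inputs (screen, sample, precision) that the image-search function takes.
import math

def ncc(window, template):
    """Normalized cross-correlation score of two equal-size patches (-1..1)."""
    n = len(template) * len(template[0])
    wm = sum(map(sum, window)) / n
    tm = sum(map(sum, template)) / n
    num = den_w = den_t = 0.0
    for wr, tr in zip(window, template):
        for w, t in zip(wr, tr):
            num += (w - wm) * (t - tm)
            den_w += (w - wm) ** 2
            den_t += (t - tm) ** 2
    den = math.sqrt(den_w * den_t)
    return num / den if den else 0.0

def search(screen, template, precision=0.8):
    """Return ((row, col), score) of the best match, or None below threshold."""
    th, tw = len(template), len(template[0])
    best = ((0, 0), -1.0)
    for r in range(len(screen) - th + 1):
        for c in range(len(screen[0]) - tw + 1):
            window = [row[c:c + tw] for row in screen[r:r + th]]
            score = ncc(window, template)
            if score > best[1]:
                best = ((r, c), score)
    return best if best[1] >= precision else None

# A 6x6 "screen" with the 2x2 template pasted at (row 3, col 2).
template = [[9, 1], [1, 9]]
screen = [[0] * 6 for _ in range(6)]
screen[3][2], screen[3][3] = 9, 1
screen[4][2], screen[4][3] = 1, 9
hit = search(screen, template, precision=0.8)
```

An exact copy of the template scores 1.0 on the heat map, while unrelated regions score near 0, which is exactly why the precision threshold works as a yes/no filter.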

Screenshot Efficiency

To achieve a short response time, each screenshot needs to be taken efficiently. When I started working on this, I used one of the most popular Python screenshot libraries, pyscreenshot. Not long after writing the first test script, I realized that it took too long to capture the screen and the script couldn’t smoothly control the character. I could have switched to a more powerful computer, but I don’t have the budget, so I tried other screenshot libraries hoping to find a more efficient one. What I found interesting is that different screenshot libraries take very different amounts of time to capture the screen; I’m not sure why some perform better than others since they are basically doing the same job. The most efficient Python screenshot library I found is Python-MSS.

Time consumption for capturing screen 100 times with pyscreenshot and MSS

The figure above shows the time and the average CPU usage of pyscreenshot and Python-MSS when each is asked to capture the screen 100 times and save the result to a variable. It takes 0.35 seconds for pyscreenshot to capture one frame on my laptop, while Python-MSS does the same in 0.09 seconds. Python-MSS is almost 4 times faster than pyscreenshot while consuming only about 5% more CPU, a huge improvement in script performance.
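A rough harness like the one below reproduces this kind of comparison; the capture callable is a dummy stand-in here so the sketch runs anywhere, but in the real test it would be the library’s grab call (e.g. an `mss` screenshot):

```python
# Generic benchmark harness for screenshot back-ends. `grab` is whatever
# callable captures one frame (in the real comparison, a pyscreenshot or
# Python-MSS call); here a dummy stand-in is timed so the sketch runs
# anywhere without either library installed.
import time

def benchmark(grab, frames=100):
    """Return average seconds per call over `frames` calls."""
    start = time.perf_counter()
    for _ in range(frames):
        grab()
    return (time.perf_counter() - start) / frames

dummy_frame = bytes(64)  # stand-in for real pixel data
avg = benchmark(lambda: dummy_frame, frames=100)
```

Using `time.perf_counter` rather than `time.time` avoids clock-resolution artifacts when individual calls are only a few milliseconds long.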

Navigation

Knowing where the character currently is turned out to be the most complex task in the script; about 70% of the development time was spent building a navigation system. The game itself doesn’t provide coordinate data, like longitude and latitude, or any other numeric representation of the character’s current location.

One possible solution is to look for landmarks and determine the character’s location relative to them. This method may work great for 2D games, but Albion is a 2.5D game and objects change shape as the camera moves.

The gate shown in the figures above is the entrance gate near the south end of the island. Notice how the shape of the gate changes as the character moves from the front to the back of the gate. Despite being subtle, this change in shape is enough to confuse the image-search function, and it’s nearly impossible to find a sample image that lets the script recognize the gate in both cases. The lighting also changes in game over time: the scene cycles from daylight to moonlight and back every hour in real life, which in my opinion is a stunning design. I love staring at the reflection of stars in the water during in-game nighttime; it’s such a relief from the intense PvP battles. But this change in lighting also makes direct image search useless.

AI is certainly a viable solution for this type of object recognition, but it requires too much computing power: even a top-notch GPU can barely run object recognition at 30 FPS with optimized settings. It also requires a huge number of labelled images to train a model, which is not worth my time. All in all, AI is overkill.

SIFT matching might work well in this situation, and OpenCV even provides SIFT-matching functions in Python. I didn’t consider this approach when I wrote the script, but I suspect it would also take a considerable amount of computing power, which my humble laptop doesn’t possess. I might give it a try in my next gaming script.

Luckily, the game has a diamond-shaped mini-map at the bottom-right corner, with an arrow-shaped icon representing the current location. It looks like this:

Mini-map of island

The mini-map gives critical location information relative to the island, so I decided to take advantage of it. The blue arrow showing the character’s location rotates whenever the character changes direction, so I couldn’t use the image-search function to directly locate the arrow (it’s shown as a red arrow in the figure below).

The empty map is generated by merging three normal maps; the empty map is used as a blank background to find where the arrow is

To utilize the mini-map, I need an empty mini-map as the background, so I wrote another program to generate one. Three mini-maps were captured with the character standing at locations far from each other. These three maps went through a pixel-wise comparison, and a new map was generated such that each pixel of the new map matches at least two of the three source maps. The result is an empty mini-map with no blue arrow on it. From there, the current mini-map on screen is captured and compared with the empty map, and the difference between the two is the set of pixels belonging to the blue arrow. Note that this blue arrow is direction-dependent, so the mean of these pixel locations doesn’t point to the exact location of the character on the mini-map; it takes another few steps to solve that, and I won’t go into the details here. One thing worth noting is that a high resolution setting helps a lot for this task, since we are extracting very precise information from really small images.
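The pixel-wise majority vote can be sketched with tiny 2-D lists standing in for the three mini-map captures (values are illustrative): the background agrees on at least two captures at every pixel, while the arrow moves, so voting removes it.

```python
# Sketch of the empty-map trick: build a background by per-pixel majority
# vote over three captures, then diff a live capture against it to find
# the arrow pixels.

def majority_map(m1, m2, m3):
    """Per-pixel: keep the value that appears on at least two maps."""
    empty = []
    for r1, r2, r3 in zip(m1, m2, m3):
        row = []
        for a, b, c in zip(r1, r2, r3):
            row.append(a if a == b or a == c else b)  # else b == c (two maps agree)
        empty.append(row)
    return empty

def diff_pixels(current, empty):
    """Coordinates where the live mini-map differs from the background."""
    return [(r, c)
            for r, row in enumerate(current)
            for c, (p, q) in enumerate(zip(row, empty[r]))
            if p != q]

# Background is all 0s; the "arrow" is value 9 at a different spot each capture.
base = [[0] * 4 for _ in range(4)]
def with_arrow(r, c):
    m = [row[:] for row in base]
    m[r][c] = 9
    return m

empty = majority_map(with_arrow(0, 0), with_arrow(1, 2), with_arrow(3, 3))
arrow = diff_pixels(with_arrow(2, 1), empty)
```

On real captures each “pixel” is an RGB tuple and a small tolerance replaces exact equality, but the voting logic is the same.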

Once we know the coordinate of the arrow on the mini-map (A) and the destination coordinate on the mini-map (B), a vector pointing from A to B can be obtained. Re-position this vector so that it starts from the character’s feet on screen; where the vector points now is where the mouse should click to move the character. This operation is looped until the arrow’s position overlaps the destination.
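The click computation itself is a few lines; the mini-map-to-screen scale factor below is an assumed placeholder, since the real conversion depends on the game’s zoom level:

```python
# Sketch of the move-click computation: take the vector from the arrow (A)
# to the destination (B) on the mini-map, scale it from mini-map units to
# screen pixels (the scale factor is an assumed placeholder), and re-anchor
# it at the character's feet to get the click position.

def click_point(arrow_xy, dest_xy, feet_xy, scale=8.0):
    ax, ay = arrow_xy
    bx, by = dest_xy
    fx, fy = feet_xy
    return (fx + (bx - ax) * scale, fy + (by - ay) * scale)

# Arrow at (10, 10) on the mini-map, destination at (13, 14),
# character's feet drawn at screen position (640, 400):
pt = click_point((10, 10), (13, 14), (640, 400))
```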

When obstacles exist, multiple “bridge points” should be used to guide the character

That’s not the end of the story. So far the navigation system can only lead the character in a straight line, and obstacles on that line can easily block the way. Multiple “bridge points” around each obstacle are used to guide the character around it.
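A minimal simulation of the bridge-point loop looks like this; the step size, tolerance and coordinates are illustrative, not taken from the script:

```python
# Waypoint sketch: instead of heading straight for the goal, walk through
# "bridge points" placed around obstacles, treating each one as reached
# once the position is within a small tolerance.

def navigate(position, waypoints, step=1.0, tol=0.5, max_iters=1000):
    """Move `position` through each waypoint in turn; return the path taken."""
    x, y = position
    path = [(x, y)]
    for wx, wy in waypoints:
        for _ in range(max_iters):
            dx, dy = wx - x, wy - y
            dist = (dx * dx + dy * dy) ** 0.5
            if dist <= tol:           # bridge point reached
                break
            x += step * dx / dist     # one simulated "click" toward the point
            y += step * dy / dist
            path.append((x, y))
    return path

# Detour around an obstacle between (0, 0) and (10, 0) via a bridge point at (5, 4).
path = navigate((0.0, 0.0), [(5.0, 4.0), (10.0, 0.0)])
end = path[-1]
```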

Notification System

Problems can appear out of nowhere; this is especially true for online games. It’s necessary to have a notification system that sends me an alert whenever there’s an issue with the script. I have an Android phone and I’ve used a few notification apps with Python APIs. Unfortunately, most of them have shut down and the rest now charge per use. I spent some time searching but couldn’t find a good, free platform that can push notifications to my phone. Then I realized that I don’t really need a notification platform – any SNS with an open API will work. It didn’t take long to find a Python library called fbchat that can log in to Facebook and send messages, so my farming bot now messages me like this (apparently I never keep enough wheat seeds in the crate):

Messages sent on Facebook Messenger by the script

Miscellaneous

You are a really good reader if you’ve made it this far; just bear with me for a few more notes I’d like to share.

  • Keep in mind that unpredictable errors can occur from time to time due to lag or game bugs, so it’s a good idea to track the script’s progress and save it to a file on disk. I use openpyxl to read and write the data in Excel files.
  • The script sends mouse and keyboard commands multiple times per second. This is dangerous, because when it runs into problems you would have a hard time stopping it – there’s simply not enough time to move the cursor to the close button and click. pyautogui has an interrupt setting called pyautogui.FAILSAFE; setting it to True turns on fail-safe mode, and whenever the cursor moves into the top-left corner of the screen, the script stops.
  • It’s a good habit to write a confirmation function immediately after each action function to check that the action was actually executed. When automating a complex program, one should never assume that each line of code will run as planned.
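That act-then-verify habit can be captured in a small helper; the action and confirmation callables below are dummies for illustration:

```python
# Act-then-verify sketch: every action is paired with a check that the
# state actually changed, retrying a few times before raising.
import time

def act_and_confirm(action, confirm, retries=3, wait=0.0):
    """Run `action`, then check `confirm`; retry up to `retries` times."""
    for _ in range(retries):
        action()
        time.sleep(wait)        # give the program a moment to react
        if confirm():
            return True
    raise RuntimeError("action never confirmed")

# Dummy example: the "action" flips a flag, the confirmation reads it back.
state = {"chest_open": False}
ok = act_and_confirm(
    action=lambda: state.update(chest_open=True),
    confirm=lambda: state["chest_open"],
)
```

In the real script the confirmation would be another image search, e.g. checking that the chest window actually appeared on screen before trying to drag items out of it.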

How to Prevent Such Automation Scripts as a Game Developer?

You and your team are developing a game and you absolutely hate it when people try to automate it – how do you prevent that from happening? You can try to build a robust anti-cheat program that constantly monitors suspicious actions from other programs, but this inevitably consumes a huge chunk of computing power and slows down the game, leading to a terrible player experience. Or you can make automation hard by adding a human-verification pop-up before any important in-game action, but this brings players more frowns and script writers more fun. The best way to prevent automation, in my humble opinion, is to make the boring parts of the game entertaining. Take farming in Albion Online as an example: it just seems too boring to sow, water and harvest crops on a quiet island every single day in this PvP game. How about changing the game mechanics so that you can invade other players’ islands and steal their crops? You might end up in a fight with the island owner, and if he wins, you are forced to do his farming work for the next few days and lose the chance to invade others’ islands – unless, of course, you are willing to pay a hefty fee to bail yourself out of this servitude… Hmm, I wish I could be a game designer one day.

Thanks for reading and hope you have found some notes here useful!


Use Raspberry Pi as Router with PPPoE


The router at my home has been struggling with high temperatures during summer days, and its performance gives me headaches whenever I need a stable network. The Raspberry Pi is a robust mini-computer, so I soon committed to the idea of turning a Raspi 3 into my new router. After eight hours of trial and error I finally managed to make it work on a PPPoE network, and I’d like to share my experience. The following tutorial is largely based on Turn a RaspBerryPi 3 into a WiFi router-hotspot.

The Raspi I used is a Raspberry Pi 3 Model B. I expect the Model B+ and Zero W to work too, but I haven’t tested them. If you are using an older Raspi you might need a wifi dongle.

First things first, let’s make sure the Raspi can connect to the Internet over PPPoE. Download pppoeconf from here, and install it with sudo dpkg -i pppoeconf_1.21_all.deb. Now connect the Ethernet cable to the Raspi and open pppoeconf with sudo pppoeconf. Follow the instructions, and by the end you should be able to access the Internet.

Now let’s set up DHCP. Feel free to change parameters like the IP addresses or DNS in the code below if you know what’s going on; otherwise you can safely copy and paste everything here to proceed. Execute the following commands:

sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install hostapd isc-dhcp-server
sudo nano /etc/dhcp/dhcpd.conf

Comment these two lines:

option domain-name "example.org";
option domain-name-servers ns1.example.org, ns2.example.org;

Uncomment this line:

#authoritative;

Copy and paste the following code to the end of the file:

subnet 192.168.42.0 netmask 255.255.255.0 {
range 192.168.42.10 192.168.42.50;
option broadcast-address 192.168.42.255;
option routers 192.168.42.1;
default-lease-time 600;
max-lease-time 7200;
option domain-name "local";
option domain-name-servers 8.8.8.8, 8.8.4.4;
}

Press Ctrl+X to exit, Y to save and Enter to confirm.

sudo nano /etc/default/isc-dhcp-server

Change INTERFACES="" to INTERFACES="wlan0". Save&exit.

sudo ifdown wlan0
sudo nano /etc/network/interfaces

Add the following lines to the end:

auto lo
iface lo inet loopback
iface eth0 inet dhcp
allow-hotplug wlan0
iface wlan0 inet static
address 192.168.42.1
netmask 255.255.255.0
post-up iw dev $IFACE set power_save off

Save&exit.

Configure the router’s IP:

sudo ifconfig wlan0 192.168.42.1

The DHCP setup is done. Time to handle wifi.

sudo nano /etc/hostapd/hostapd.conf

Add the following lines to the file:

interface=wlan0
ssid=RaspiPoweredWifi #change to name of your wifi
hw_mode=g
channel=6 #change to others if you know what you are doing
macaddr_acl=0
auth_algs=1
ignore_broadcast_ssid=0
wpa=2
wpa_passphrase=12345678 #change to your wifi password
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP

Save&exit. Now we can set up forwarding:

sudo nano /etc/sysctl.conf

Jump to the very end and add:

net.ipv4.ip_forward=1

Save&exit.
Set up iptables (some tutorials use ppp0 as the outbound interface here, but in my test eth0 worked and ppp0 did not):

sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT

Add to startup:

sudo sh -c "iptables-save > /etc/iptables.ipv4.nat"
sudo nano /etc/network/interfaces

Go to the end and add:

up iptables-restore < /etc/iptables.ipv4.nat

Save&exit.
Start two services:

sudo service hostapd start
sudo service isc-dhcp-server start

Reboot. Mission accomplished.



Chicken-Baking Oven Controller: An “Unnecessary” Attempt

I want to have baked chicken wings for lunch. Freshly baked chicken wings with sesame oil and celery – nothing compares to a delicious, nutrient-balanced lunch that powers me through another productive afternoon. But I only have one hour for lunch. Even though I live 5 minutes from the office, I still need 45 minutes to cook chicken wings and another 20 minutes to enjoy them. Although my manager is such a nice guy that he grants me some extra lunch time when he hears the story, I still feel guilty standing next to an oven doing nothing for 45 minutes. I need a way to start cooking the wings by sending a command over the Internet while I’m still at work, so that by the time I get home, the food is ready. That’s right: I want a chicken-baking oven controller.

The apartment I live in has an electric range with a stove on top and an oven below. At the very top is a control panel with several knobs that control each heater coil, and it looks like this:

Because I’m renting the apartment, I can’t simply take the appliance apart and add relays to control the on/off state of the coils. All I can do is add some kind of mechanism to the control panel that physically turns the oven knob. And because I can’t damage the range at all – no drilling – the mechanism must be fixed to the control panel with magnets, glue or adhesive tape. The knob doesn’t require a push before turning, which makes things slightly easier.

The plan is not too complicated. A coupler that fits the shape of the knob sits at the end of the controller, so that when the coupler turns, the knob turns. A small motor, preferably a small servo, drives the coupler and provides angle control. An MCU with a WiFi module is needed to program the controller, be it an Arduino, a RasPi or something else. I decided to start with a NodeMCU with an ESP8266 because it’s cheap, Internet-ready and powerful enough for the project.

The first preliminary design uses the NodeMCU to drive a servo that turns the coupler back and forth, just to see if the servo has enough torque to turn the knob. The coupler looks like this:

The front of the coupler fits the shape of the knob, and the triangular cutout in the center allows a servo to drive it. Very soon after the first test started, I realized that the torque generated by one servo is simply not sufficient to turn the knob. Moreover, my servo can only turn from about 5° to 170° due to its design, but the knob needs to be turned by at least 270°, otherwise I’d have uncooked chicken wings. There are online tutorials on modifying a cheap servo to turn 360°, but I worried it would mess up the servo library in the code and complicate precise control. A stepper motor might be a great option, since steppers have no turning boundaries and can be controlled very precisely, but they’re a little pricey (for me) and require more power and a higher voltage (than 5V) to drive, so that became the backup plan. Instead, a gearbox can increase the overall output torque, and partial-tooth gears can extend the angle the knob can turn. I combined a gearbox with partial-tooth gears and came up with this design:

Each of the four gears on the sides is attached to a servo. The central master gear is turned by the four gears one at a time, so each servo only needs to rotate a little more than 90°. The pitch circle of the master gear is also larger than that of each servo gear, so each servo can provide enough torque to rotate it. The triangular insert at the end of the master gear fits into the triangular hole on the coupler so that the knob can be turned. Every piece is 3D printed and assembled together. The four servo gears need to work together in a precisely timed manner to “pass” the master gear from one to another, and this required a fair amount of calibration and code modification. The final prototype works like this (click to see the animated gif):
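A quick back-of-envelope check of the gear arithmetic; the sweep angle and pitch-circle ratio below are assumed illustrative numbers, not measurements from the actual prototype:

```python
# Gear arithmetic for the four-servo hand-off design. With ratio r =
# (servo pitch radius / master pitch radius) < 1, each servo sweep turns
# the master by less angle but with more torque; four sequential sweeps
# must still add up to at least the 270 degrees the knob needs.
# All numbers here are illustrative assumptions.

def master_rotation(servo_sweep_deg, n_servos, ratio):
    """Total master-gear rotation when servos hand off one after another."""
    return n_servos * servo_sweep_deg * ratio

def torque_gain(ratio):
    """Torque on the master gear relative to one servo's output torque."""
    return 1.0 / ratio

ratio = 0.8                               # assumed servo/master pitch-radius ratio
total = master_rotation(90.0, 4, ratio)   # degrees available at the knob
gain = torque_gain(ratio)                 # torque multiplier per servo
```

With these assumed numbers the master gear gets about 288° of travel (clearing the 270° requirement) while each servo’s torque at the knob is multiplied by 1.25, which is the trade-off the design banks on.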

Everything works perfectly until I fix the housing to the range control panel with double-sided tape. The housing can hardly keep itself in position once the servos are working: by Newton’s third law, the torque applied to the knob is also applied to the housing, and because double-sided tape offers little resistance along the direction parallel to the panel, the housing tends to rotate against the servo gears, disrupting the gear meshing and putting even more torque on the knob and the housing.

In the end I decided to stop the project, as I could hardly think of a way to fix the housing firmly in position without damaging the appliance. On the other hand, the project doesn’t seem very useful anyway: on an oven of my own, I could simply install an MCU and relays inside and control everything over the Internet without any of these mechanical design issues. Looking back, though, this “unnecessary” project did give me a better understanding of how important the initial design is. I should have thought through the possible problems at the beginning and picked a different approach.

Lessons learned, time to order some chicken wings for lunch.


DIY Locked Door Detector

Did you lock the door today?

Let me ask again: did you lock the door today?

Are you sure you do not want to go back and check?

Welcome back. After a few mornings of waking up to find the apartment door unlocked, I decided to do something to save myself from my carelessness. The idea is simple: build a locked-door detection system that notifies me every time I forget to lock my door. A lovely schematic soon appeared on my napkin (or draw.io):

DoorDetectionDiagram
A magnet is attached to the end of the deadbolt of my door lock. Inside the strike box is a hall effect sensor which detects the distance to the magnet. A micro-controller gathers the reading from the hall effect sensor, so it knows whether the door is locked or not. If the door is not locked, the micro-controller sends a notification to my phone. Since there is no power socket near the door, I am going to power the micro-controller with a rechargeable battery.

What a smart idea! As I submerged myself in self-pride, a thread on a micro-controller community rescued me from drowning:

discussionFromParticle

 

Well, it seems someone brought the idea to the table back in 2014…

 

Anyway, I am going to build it, in a 2016 way.

 

Bill of Materials

  • Magnet tape

tape

  • Hall effect sensor – Notice that as of today (08/03/2016) the only hall effect sensor on Sparkfun is a latching hall effect sensor (US1881), which is good for determining the polarity of a magnet but not the magnitude of the magnetic field (which indicates the distance from the magnet). I bought an A1324LUA-T, which is a linear hall effect sensor. As described in its datasheet, “the presence of a south-polarity magnetic field perpendicular to the branded surface of the package increases the output voltage from its quiescent value toward the supply voltage rail. The amount of the output voltage increase is proportional to the magnitude of the magnetic field applied. Conversely, the application of a north polarity field will decrease the output voltage from its quiescent value.” In short, if the south end of the magnet is always facing the sensor, the output voltage increases as the distance between them decreases, and vice versa.

hallEffectSensor

  • Micro-controller – Almost any major micro-controller can do the job, but for this project I am using a Particle Photon because it has a built-in Wi-Fi module and an easy-to-use cloud IDE, which is perfect for the design requirements.

photon

  • SparkFun Photon battery shield – Optional; as long as you know how to power the Photon with a battery and how to charge the battery, you are good to go. I chose the battery shield just to make life easier.

batteryShield

  • Li-ion battery – I bought a 2000mAh battery with a JST cable. When it comes to batteries, bigger is better.

battery

  • Fastener tape/double-sided tape

  • Wires

The total cost is around $50, depending on how many tech-savvy friends you have.

 

Wire them up

As shown in the schematic above, the wiring is very simple. Here is the step-by-step recipe:

  1. Set up the Photon. Here is the detailed instruction.
  2. Mount the Photon on the battery shield.
  3. Put the hall effect sensor on a table with the branded (uneven) side facing up. Connect the leftmost pin to the 3V3 pin on the Photon, the center pin to GND, and the rightmost pin to A0.
  4. Cut a small piece of magnet tape and paste it on the end side of the deadbolt.
    IMG_0816
  5. Stick one part of a fastener tape on the inside wall of the strike box, put the other part on the back of the hall effect sensor, then press the two parts together. I bent the sensor legs to fit the wires in the strike box.IMG_0815IMG_0814
  6. Plug the battery in the battery shield. Done.

I ended up getting this on my wall:

IMG_0829

I moved to a new apartment while I was building this, so the door in this picture is different from the one in the others. One thing you can learn from this picture is that tape is really helpful 😛 . I am going to design a housing for this system and 3D print it; hopefully in a few weeks I will no longer have this mess on my wall.

 

A Little Test

To compare the hall effect sensor’s reading when the door is locked with the reading when it is unlocked, I flashed the Photon with Tinker and set pin A0 to “analogRead”. When the door is unlocked, the reading is around 2030; when the door is locked, the reading is nearly 0. The difference is significant enough – time to move on!
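These numbers make sense if you convert them: the Photon’s analogRead maps 0–4095 counts onto 0–3.3 V, and the A1324’s datasheet gives a 5 mV/G sensitivity around a half-rail quiescent output. A small conversion sketch:

```python
# Sanity-check the readings: the Photon's 12-bit ADC maps 0..4095 counts
# to 0..3.3 V, and the A1324 (5 mV/G per its datasheet) idles near Vcc/2
# with no field. So ~2030 counts is roughly the quiescent output (no
# magnet nearby, door unlocked), and ~0 counts means a strong north-pole
# field (magnet right at the sensor, door locked).

VCC_MV = 3300.0
ADC_MAX = 4095
SENS_MV_PER_G = 5.0  # A1324 sensitivity from the datasheet

def counts_to_mv(counts):
    return counts * VCC_MV / ADC_MAX

def mv_to_gauss(mv):
    """Field estimate relative to the quiescent (half-rail) output."""
    return (mv - VCC_MV / 2) / SENS_MV_PER_G

unlocked_mv = counts_to_mv(2030)   # close to the 1650 mV quiescent point
unlocked_g = mv_to_gauss(unlocked_mv)
locked_g = mv_to_gauss(counts_to_mv(0))
```

The unlocked reading lands within a few gauss of zero field, while the locked reading corresponds to a field strong enough to slam the output to the rail, which is why a simple threshold like 1900 counts is enough to tell the two states apart.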

 

Coding Time

Not a big fan of coding? No problem. Feel free to copy and paste the code below.

Start with Particle IDE. Create a new app and give it a cool name. The code below will do the magic.

 

void setup() {
    pinMode(A0, INPUT); // set pin A0 as input
}

void loop() {
    int starttime = 0;
    int realtime = 0;
    int notification = 0;

    while (1) {
        int val = analogRead(A0); // get reading from A0, and store the value in val
        if (val > 1900) { // door is unlocked
            if (starttime == 0) { // door was just unlocked
                starttime = Time.now(); // starttime is the time when the door was unlocked
            } else {
                realtime = Time.now(); // realtime is the time now
                if ((realtime - starttime) > 10 && notification == 0) { // unlocked for more than 10 seconds
                    Particle.publish("unlockedDoor"); // publish event "unlockedDoor" to the Particle cloud
                    notification = 1; // notification has been sent
                }
            }
        } else {
            starttime = 0;
            notification = 0;
        }
        delay(500); // loop every 0.5 seconds
    }
}

What does it do? Whenever the door is unlocked, the Photon starts counting. If the door is not locked again within 10 seconds, an event called “unlockedDoor” is published to the Particle cloud.

I do not want to watch the Particle console 24/7 waiting for the event to appear. Instead, I want Particle to notify me when it sees the event. On my iPhone there is an app called Boxcar which can push notifications. If only I could make Particle call the Boxcar API… Introducing webhooks. A webhook can be created in the Particle online console under “Integrations”. The setup should look like this:

webhook1

Tada! Now if I forget to lock my door, this appears on my phone:

IMG_0831
