Albion Online Farming Script: Is Image-Based Automation Worth Trying?

I’m a big fan of automating boring stuff on my computer. I’ve written programs in Python and JS to place orders on online shopping websites, transfer data between Excel files, extract information from PDFs, and so on. In 2020, you can almost always find a Python library that offers full or partial access to the APIs of the program you’d like to automate – Selenium supports multiple languages for webpage automation/testing; pywinauto is good for automating native Windows apps; openpyxl lets you read and write Excel files without even installing MS Excel (no macro support yet); and the list goes on. Despite their ease of use, these libraries are tied to the specific tasks they were designed for, so they rarely allow cross-program or general-purpose automation. Complex programs that intentionally close off their APIs, e.g. video games, also leave little room for these libraries. An image-based automation method, which gathers information directly from the user’s screen, imitates how humans process information and is thus a promising approach to general-purpose automation.

I’ve been playing an MMORPG called Albion Online for a few months. It features a player-driven economy in which most in-game items are produced, traded and consumed by players, with very limited intervention from the game itself. One way to earn game currency is to farm crops. Players buy seeds from the market, sow them on their personal islands, water the seedlings, harvest them the next day, and sell the produce on the market for a profit. It’s a tedious daily chore, so I challenged myself to write a Python program that automates the whole process.

Disclaimer

Using an automation script violates the terms of service of the game (specifically term 13.3, No Manipulation). This article does not share any snippet of the automation script, nor does it promote using such scripts in game. The sole purpose of this article is to share my findings from applying image-based automation to a complex computer program.

Please do not contact me to acquire this script, because:

  1. the script is written in such a way that it only works on my own computer
  2. I have no interest in profiting from selling it
  3. I enjoy the game and don’t wish to see the in-game economy disrupted
  4. I would strongly suspect that you work for SBI (the game’s developer) and are trying to obtain my account information so you can ban me

I’ll try my best to answer any technical questions you may have, but nothing beyond that.

Why Image-Based?

Because other approaches won’t work. Here’s what the script is expected to do in sequence:

  1. open the chest near the south entrance of my island
  2. equip my character with a bag
  3. transfer seeds from the chest to the bag
  4. move the character to the first farm near the north end of the island
  5. harvest products by clicking the nine product icons in sequence, clicking the “harvest” button every time the confirmation window pops up
  6. select the correct seed icon from the bag/inventory, click the “place” button and click the right position on the farmland
  7. click on each seedling (placed seed), clicking the “water” button every time the confirmation window pops up
  8. move to the remaining four farms in sequence and repeat the harvest-sow-water actions (5-7)
  9. move back to the chest and store all products
  10. unequip the bag and store it in the chest.

Using a script to trade in a player-driven market seems like a dangerous move to me, so I purchase seeds manually and store them in a chest on my island. For the same reason, I sell the products manually as well.

Just by inspecting the above activities, one might suggest using a simple macro repeater, which records all mouse and keyboard activity in one session and replays it in the next harvest cycle. Well, there are at least two reasons why a macro repeater can’t do the job.

Rabbits roam the island at random

For one, there are randomly generated rabbits running around the island. Most of the mouse clicks recorded by a macro repeater are meant to move the character to a desired position, but when replayed, these clicks can land on one of the rabbits. The character then goes off and hunts the rabbit instead of moving to the right position, immediately invalidating all subsequent mouse and keyboard actions. The other reason is the performance of online games in general. The communication between client and server isn’t always stable, which is why lag occurs from time to time. A macro repeater cannot tolerate any lag, because lag offsets or even cancels mouse actions, leading to complete failure of the program.

For the script to work, it has to handle both the input and the output sides: in other words, how the script gathers information (current character state, location, item positions etc.) from the game, and how it sends commands back to the game to control the character. The output side is easy, because several Python libraries can send mouse and keyboard actions once the position information is known; I’m using pyautogui for that purpose. On the input side, however, the game uses anti-cheat software that prevents third-party programs from hijacking the game packets or modifying game data in RAM. Even if we were willing to risk being caught by the anti-cheat software, most packets sent and received by the game, especially those containing critical position information, are encrypted. (A recent Reddit post claims the packets are actually plain text, but when I looked at them they were encoded, to say the least; people with better knowledge of packet sniffing may well have more direct ways of obtaining game data.) I also haven’t found a way to easily read that information from RAM using Python. An image-based solution seems to be the only one I can rely on. The idea is that the script “looks at the screen” and figures out where the character is, where the needed items are, where the action buttons are, and so on. Based on the gathered information, it decides on the next action and sends the corresponding mouse/keyboard commands.
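To illustrate the output side, here is a minimal sketch of sending mouse and keyboard commands with pyautogui; the coordinates and the key are hypothetical placeholders, not values from my script:

import pyautogui

pyautogui.FAILSAFE = True  # abort by slamming the cursor into the top-left corner

def click_at(x, y, duration=0.2):
    """Move the cursor smoothly to (x, y) and left-click there."""
    pyautogui.moveTo(x, y, duration=duration)
    pyautogui.click()

click_at(640, 360)    # e.g. click the center of a 1280x720 game window
pyautogui.press("a")  # press a keyboard key, e.g. a hypothetical in-game hotkey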

Development Environment

The script runs on a 5-year-old laptop with an entry-level i7 CPU, an entry-level NVIDIA GPU and a 128G SSD. It’s not the most powerful computer you’ve ever seen; in fact it can barely reach 60 FPS in the game under high video settings. To make sure the script could run without much lag, I adjusted the game resolution to the lowest setting, a big compromise considering that every action of the script relies on clear images. Nonetheless, I was surprised that the laptop could run both the game and the script at the same time.

Cross-Correlation

Before we get into the details of the script, it helps to understand the concept behind image matching. I’m using a Python library called imagesearch, which offers a very simple API for image search. The author of the library also wanted to automate a game, so he built a Python wrapper around opencv2 and pyautogui; the library’s tutorial and documentation are available online.

The core method for finding the matching point is called cross-correlation. You may find its Wikipedia page helpful.

Source: https://gifer.com/en/2xBt

Cross-correlation of two one-dimensional signals generates another one-dimensional signal that represents the similarity of the two signals at various offsets. Likewise, cross-correlation of two two-dimensional images produces another two-dimensional image, a “similarity heat map” of the two. Of the two images being compared, one comes directly from your computer screen; the other is the sample image you are searching for. On the heat map, the location of the highest-valued pixel is where the sample image is most likely to appear on your screen, and the value of that pixel indicates how similar the sample image is to that region. Forgive the wordy explanation; the takeaway is that the image-search function takes three important inputs: a screenshot of the game, a sample image, and a precision score that sets the threshold on the similarity score. If the highest similarity score found is lower than the precision score, no result is returned. Every time I use the image-search function in the script, the precision score needs to be calibrated to get the best result.
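The mechanism looks roughly like this in OpenCV; this is a sketch of the same idea rather than the imagesearch library’s actual code, and the file names are placeholders:

import cv2

# TM_CCOEFF_NORMED is normalized cross-correlation, so scores fall in [-1, 1]
def find_image(screenshot, sample, precision=0.8):
    """Return the top-left corner of the best match, or None if below threshold."""
    heat_map = cv2.matchTemplate(screenshot, sample, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(heat_map)
    if max_val < precision:
        return None  # nothing on screen resembles the sample closely enough
    return max_loc

screenshot = cv2.imread("screenshot.png")   # the full game frame
sample = cv2.imread("harvest_button.png")   # the sample image to search for
position = find_image(screenshot, sample, precision=0.8)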

Screenshot Efficiency

To achieve a short response time, each screenshot needs to be taken efficiently. When I started working on this, I used one of the most popular Python screenshot libraries, pyscreenshot. Not long after writing the first test script, I realized it took too long to capture the screen, and the script couldn’t control the character smoothly. I could have switched to a more powerful computer, but I didn’t have the budget, so I tried other screenshot libraries in the hope of finding a more efficient one. What I found interesting is that different screenshot libraries take very different amounts of time to capture the screen; I’m not sure why some perform better than others when they are basically doing the same job. The most efficient Python screenshot library I found is Python-MSS.

Time consumed capturing the screen 100 times with pyscreenshot and Python-MSS

The figure above shows the time and average CPU usage of pyscreenshot and Python-MSS when each is asked to capture the screen 100 times and save the result to a variable. On my laptop, pyscreenshot takes 0.35 seconds to capture one frame, while Python-MSS takes only 0.09 seconds. Python-MSS is almost 4 times faster than pyscreenshot while consuming only 5% more CPU, a huge improvement for script performance.
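A benchmark along these lines is easy to reproduce; here is a minimal sketch with Python-MSS (pyscreenshot exposes a similar grab() call):

import time
import mss

with mss.mss() as sct:
    monitor = sct.monitors[1]      # the primary monitor
    start = time.time()
    for _ in range(100):
        frame = sct.grab(monitor)  # raw BGRA pixels of the full screen
    elapsed = time.time() - start

print(elapsed / 100)  # seconds per frame; about 0.09 on the laptop above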

Navigation

Figuring out where the character currently is turned out to be the most complex task in the script; about 70% of the development time was spent building the navigation system. The game itself doesn’t provide coordinate data, like longitude and latitude, or any other numeric representation of the character’s current location.

One possible solution is to look for landmarks and compute the character’s location relative to them. This method may work well for 2D games, but Albion is a 2.5D game, and objects change shape when the camera moves.

The gate shown in the figures above is the entrance gate near the south end of the island. Notice how the shape of the gate changes as the character moves from the front to the back of it. Subtle as it is, this change in shape is enough to confuse the image-search function, and it’s nearly impossible to find a sample image that lets the script recognize the gate in both cases. Lighting conditions also change in game over time: the scene cycles from daylight to moonlight and back every real-life hour, which in my opinion is a stunning design. I love staring at the reflection of stars in the water during the in-game night; it’s such a relief from the intense PvP battles. But this change in lighting also renders direct image search useless.

AI is certainly a viable approach to this kind of object recognition, but it requires too much computing power: a top-notch GPU can barely run object recognition at 30 FPS even with optimized settings. It also requires a huge number of labelled images to train a model, which is not worth my time. All in all, AI is overkill.

SIFT matching might work well in this situation, and OpenCV even provides SIFT-matching functions in Python. I didn’t consider this approach when I wrote the script, but I suspect it would also take a considerable amount of computing power, which my humble laptop doesn’t possess. I might give it a try in my next gaming script.

Luckily, the game has a diamond-shaped mini-map at the bottom-right corner, with an arrow-shaped icon representing the current location. It looks like this:

Mini-map of island

The mini-map gives critical location information relative to the island, so I decided to take advantage of it. The blue arrow showing the character’s location changes direction whenever the character changes direction, so I couldn’t use the image-search function directly to locate it (the arrow is shown in red in the figure below…).

The empty map is generated by merging three normal maps; it is used as the blank background for finding the arrow

To utilize the mini-map, I needed an empty mini-map to serve as the background, so I wrote another program to generate one. Three mini-maps were captured with the character standing at locations far apart from each other. These three maps then went through a pixel-wise comparison, and a new map was generated such that each of its pixels matches the corresponding pixels in at least two of the three source maps (a pixel-wise majority vote). The result is an empty mini-map with no blue arrow on it. From there, the current mini-map on screen is captured and compared against the empty map, and the difference between the two is the set of pixels belonging to the blue arrow. Note that the arrow’s shape is direction-dependent, so the mean of these pixel locations doesn’t point to the exact location of the character on the mini-map; solving that took another few steps, and I won’t go into the details here. One thing worth noting is that a high-resolution setting helps a lot for this task, since we are extracting very precise information from very small images.
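A minimal sketch of the two steps, assuming the mini-map captures are loaded as equally sized numpy arrays; the tolerance value is a hypothetical starting point:

import numpy as np

def build_empty_map(m1, m2, m3):
    """Pixel-wise majority vote: keep the value that at least two maps agree on."""
    empty = m1.copy()
    disagree = np.any(m1 != m2, axis=-1)  # where m1 and m2 differ, one holds the arrow
    empty[disagree] = m3[disagree]        # m3 agrees with the clean map there
    return empty

def find_arrow(current_map, empty_map, tolerance=30):
    """Pixels that differ from the empty background belong to the arrow."""
    diff = np.abs(current_map.astype(int) - empty_map.astype(int)).sum(axis=-1)
    ys, xs = np.nonzero(diff > tolerance)
    return xs.mean(), ys.mean()  # rough arrow position, still direction-dependent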

Once we know the arrow’s coordinate on the mini-map (A) and the destination’s coordinate on the mini-map (B), we can form a vector pointing from A to B. Re-position this vector so that it starts at the character’s feet on screen; where it now points is where the mouse should click to move the character. This operation is looped until the arrow’s position overlaps the destination.
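One step of that loop might look like the sketch below. The scale factor and the feet position are hypothetical and would need calibration; the real mapping also has to account for the rotation of the diamond-shaped mini-map, which is omitted here:

import pyautogui

MAP_TO_SCREEN = 8.0        # hypothetical mini-map-to-screen scale factor
FEET_X, FEET_Y = 640, 400  # hypothetical on-screen position of the character's feet

def step_towards(arrow_xy, dest_xy):
    """Click once in the direction of the destination; return False on arrival."""
    dx = dest_xy[0] - arrow_xy[0]
    dy = dest_xy[1] - arrow_xy[1]
    if abs(dx) < 2 and abs(dy) < 2:  # close enough on the mini-map
        return False
    pyautogui.click(FEET_X + dx * MAP_TO_SCREEN, FEET_Y + dy * MAP_TO_SCREEN)
    return True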

When obstacles exist, multiple “bridge points” should be used to guide the character

That’s not the end of the story. So far the navigation system can only move the character in a straight line, and obstacles on that line can easily block the way. Multiple “bridge points” around each obstacle are used to guide the character past it, as sketched below.
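Reusing step_towards() from the previous sketch, the detour logic reduces to walking a list of waypoints; the coordinates here are made-up mini-map points placed around a known obstacle:

BRIDGE_POINTS = [(40, 55), (52, 60), (60, 48)]  # hand-picked detour waypoints

def navigate(dest, get_arrow_position):
    """Visit each bridge point in order, then the final destination."""
    for waypoint in BRIDGE_POINTS + [dest]:
        while step_towards(get_arrow_position(), waypoint):
            pass  # keep clicking until this waypoint is reached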

Notification System

Problems can appear out of nowhere; this is especially true for online games. It’s necessary to have a notification system that sends me an alert whenever there’s an issue with the script. I have an Android phone, and I’ve used a few notification apps that offer Python APIs. Unfortunately, most of them have shut down, and the rest now charge per use. I spent some time searching but couldn’t find a good free platform that pushes notifications to my phone. Then I realized I don’t really need a notification platform: any social network with an open API will do. It didn’t take long to find a Python library called fbchat that can log in to Facebook and send messages, so my farming bot now messages me like this (apparently I never keep enough wheat seeds in the crate):
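The alert channel is only a few lines following fbchat’s documented API; the credentials below are placeholders, and messaging yourself works because every Facebook account has a self-thread:

from fbchat import Client
from fbchat.models import Message, ThreadType

client = Client("my.email@example.com", "my_password")  # placeholder credentials
client.send(
    Message(text="Script stopped: not enough wheat seeds in the chest"),
    thread_id=client.uid,          # send the message to myself
    thread_type=ThreadType.USER,
)
client.logout()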

Messages sent on Facebook Messenger by the script

Miscellaneous

You are a really good reader if you’ve made it this far; bear with me for a few more notes I’d like to share.

  • Keep in mind that unpredictable errors can occur from time to time due to lag or game bugs, so it’s a good idea to track the script’s progress and save it to a file on disk. I use openpyxl to read and write the progress data to Excel files.
  • The script sends mouse and keyboard commands multiple times per second. This is dangerous, because when it runs into problems you’d have a hard time stopping it: there’s simply not enough time to move the cursor to the red cross and click. pyautogui has an interrupt setting called pyautogui.FAILSAFE; setting it to True turns on fail-safe mode, and whenever the cursor moves into the top-left region of the screen, the script stops.
  • It’s a good habit to write a confirmation function immediately after each action function to check that the action was actually executed; a small sketch of the pattern follows this list. When automating a complex program, never assume that every line of code will run as planned.
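A minimal sketch of that act-then-confirm pattern; confirm() stands for whatever image check verifies the action landed (for instance the find_image() sketch earlier), and the retry counts are arbitrary:

import time

def act_and_confirm(action, confirm, retries=3, delay=0.5):
    """Run action(), then verify with confirm(); retry a few times before giving up."""
    for _ in range(retries):
        action()
        time.sleep(delay)  # give the game time to react (and to lag)
        if confirm():
            return True
    return False  # the caller decides whether to alert and stop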

How Can a Game Developer Prevent Such Automation Scripts?

You and your team are developing a game, and you absolutely hate it when people try to automate it; how should you prevent that from happening? You can try to build a robust anti-cheat program that constantly monitors suspicious actions from other programs, but this inevitably consumes a huge chunk of computing power and slows down the game, leading to a terrible player experience. Or you can make automation hard by adding a human-verification pop-up before any important in-game action, but that will bring players more frowns and script writers more fun. The best way to prevent automation, in my humble opinion, is to make the boring parts of the game entertaining. Take farming in Albion Online as an example: sowing, watering and harvesting crops on a quiet island every single day just seems too boring for a PvP game. How about changing the mechanics so that you can invade other players’ islands and steal their crops? You might end up fighting the island’s owner, and if the owner wins, you are forced to do farm work for them for the next few days, with no opportunity to invade anyone else’s island, unless, of course, you are willing to pay a hefty fee to bail yourself out of servitude… Hmm, I wish I could be a game designer one day.

Thanks for reading, and I hope you found some of these notes useful!


Notes from a Year on Ubuntu

I had been thinking about switching my main laptop from Windows to Ubuntu ever since the US-China trade war started, but moving every file onto an open-source system in one go felt like too big a leap. So last summer I dug out the old laptop gathering dust at the bottom of a box, formatted the whole drive, and installed Ubuntu 18.04. I had used Linux on and off for a few years and knew the basic operations, but I had never seriously run it as my personal system. This was meant to be just a trial run, but it won me over so suddenly and so completely that the Ubuntu laptop quickly became my main machine, demoting my newly bought Windows laptop to an occasional gaming rig. Ubuntu carried me through my entire graduate-school summer thesis project, from CAD modeling to writing the dissertation, and it handles Crusader Kings and Age of Empires III without any trouble. More than a year later, I still do most of my everyday computing on Ubuntu. It isn’t an operating system for everyone (maybe not even for most people), but Ubuntu has shipped a lot of good updates in the past two years and deserves a fresh look from more people. If you have a soft spot for open-source systems, or you’re fed up with the spinning pinwheel on your MacBook, or you’d rather spend the money for a genuine Windows license on Steam, or you just want to show off, now is a great time to try Ubuntu. I’d like to share some of my experience here. I have no intention of evangelizing open-source systems, but if you’re also thinking about making Linux your personal system, I hope this post gives you some ideas.

Let me start with the things about Ubuntu I couldn’t get used to. Some of them, admittedly, are only problems because I’m not a professional programmer and genuinely don’t understand some system-level details.

  1. First, the perennial Linux problem: many mainstream applications don’t support Ubuntu. WeChat, for example, has no Linux version. There are online tutorials for grafting it on with Wine, but the setup is tedious: after installing the app you install fonts, after the fonts you fix the tray icon, and it takes about twenty minutes before it works reasonably well. Sogou Pinyin has a Linux version, but installing it is just as fiddly, and the latest release has a fatal bug that eats CPU for no reason, so I reluctantly switched to Google Pinyin (stable, but no longer maintained). Purists praise the system’s built-in Chinese input method, but personally I don’t find it even as good as the ancient Zhineng ABC. If you want Windows-only software, Wine is a hurdle you cannot avoid. In short, Wine is a compatibility layer that replaces the libraries a Windows program uses with open-source equivalents, so the program can run on Linux. Configuring Wine for a given program is a trial-and-error process; for a large, obscure program with little documentation online, you may spend hours debugging and still not like the result. The fact that a lot of Chinese software doesn’t support Linux, or has stopped supporting it, is indeed disappointing.
  2. Even when software does support Ubuntu, installing it is often a hassle in itself. Always, always read the official installation docs before installing. On Windows you double-click an exe and you’re done; on macOS you drag the icon into the Applications folder; on Ubuntu there is a pile of different installation methods. Packages aren’t necessarily interchangeable between Linux distros, and the wrong package simply won’t install. In one phrase: RTFM. Sometimes several applications share a library, and uninstalling one of them can accidentally remove that library and break the rest. The cases I’ve run into were never serious (reinstalling fixed them), but many people seem to dislike this. More and more software now bundles all of its libraries together and ships through the Snap Store, which feels a bit like the Windows way of doing things. Canonical has strengthened Snap Store support in the recent releases, presumably deciding that saving a little disk space isn’t worth the package-management headache. For most everyday software, though, the Ubuntu Software store already offers one-click installation, exactly like the app store on an Android or Apple phone; the barrier to entry is very low.
  3. Although more and more functionality is available through the GUI, the terminal remains an indispensable tool. Nobody expects you to memorize many commands, though: most of the time the command you need can be copied and pasted straight from the web, no keyboard required. Eight times out of ten, when I press Ctrl+Alt+T of my own accord it’s to type sudo apt update/full-upgrade/install/purge, and most of that could be done in the software center anyway; I’m just too lazy to reach for the mouse. Note that the password you type after sudo is not echoed on screen; just finish typing and hit Enter, and don’t start doubting your keyboard.
  4. I haven’t figured out why GNOME, the desktop environment, behaves the way it does; to be precise, I understand neither GNOME nor desktop environments in general. Sometimes I select a file on the desktop, press Ctrl+X to cut it, open a folder and press Ctrl+V to paste, and the file is still lying on the desktop. Spooky. I later discovered that most of the shortcuts that work inside the file manager don’t work on the desktop. It doesn’t affect me much and I never investigated it seriously; nowadays I only ever manipulate files inside the file manager.
  5. Installed software shows up in the application list, but by default the list can’t be reordered by hand. It’s as if your phone were full of apps but wouldn’t let you move the icons, so every newly downloaded app means hunting for where it landed. (Since the 20.10 update the application list supports manual sorting and grouping, much like arranging apps on a phone.)
  6. Bugs appear from time to time. Years ago I also installed Ubuntu as my personal system, and every other shutdown would fail: either clicking shutdown did nothing, or it reached the shutdown screen and the display never turned off. I couldn’t stand it and went back to Windows after a week. 18.04 had a problem where the desktop was garbled after waking from suspend, and it took about a year before it was fixed. The open-source community has limited manpower, and some bugs simply take a long time to fix.

None of the problems above is really a problem: as long as you’re willing to spend time exploring and find the right configuration file, you can change the behavior yourself; an open-source system means you have nearly unlimited control over it. If you’re a serious developer, you can even contribute to future releases directly. But as an ordinary (lazy) user, I still hope future versions of Ubuntu will do more of this work for me.

Of course, there is also a lot to like about Ubuntu:

  1. The system is free, upgrades are free, plenty of everyday software is free, and there are almost no ads. Many paid Windows staples have free open-source replacements: LibreOffice can stand in for the Office suite almost seamlessly, and GIMP covers the common Photoshop use cases. Specialized software rarely supports Linux, but there is usually an open-source or web-based alternative; for mechanical 3D modeling, for instance, there are the open-source FreeCAD and the online OnShape.
  2. The system itself is refreshingly clean. There is none of the Windows-style bloatware popping up windows at boot, and no booting halfway only to be told the machine needs an hour of updates. Updates usually bring a noticeable performance improvement, rather than the macOS pattern where each update makes the machine slower, until one more update kills it.
  3. Steam’s support for Ubuntu is superb; my respects to Gaben. This may have been the final straw that pushed me from Windows to Ubuntu. To let Linux users play Windows games with a near-native experience, Steam released its own modified build of Wine in 2018, called Proton. Proton lets a large number of Windows games run directly on Linux, including quite a few AAA titles. To find out whether a favorite Steam game runs on Linux, just search protondb. The Witcher 3, GTA V, Skyrim, Dota, CS:GO, ARK: Survival Evolved, and my beloved Crusader Kings II all run almost perfectly on Ubuntu. Most of Valve’s own games support Linux natively, and even Microsoft titles such as the new Age of Empires Definitive Edition series are playable. Proton does seem to struggle with big online games and with the online multiplayer modes of some single-player games; PUBG, for example, won’t run. Lutris is another solution for playing Windows games on Linux; it may fix some problems Proton can’t, and the principle is much the same, except that Lutris uses vanilla Wine.
  4. Ubuntu is the most popular Linux distro for personal use, so a lot of Linux software is tested directly on Ubuntu, and compatibility with Ubuntu is accordingly the best. The interface design is genuinely good-looking too; purely on appearance it doesn’t lose to macOS.
  5. No viruses; enjoy the pleasure of running bare.
  6. The hardware requirements are low, so old machines can run Ubuntu. Raspbian, the official Raspberry Pi system, is also a Linux distro, so if you’re used to the Pi, Ubuntu will feel familiar, and vice versa. Speaking of hardware, a few years back Ubuntu was a bit too insistent on pure open source, and its graphics-driver compatibility suffered. Later the system defaulted to open-source drivers but added the option of official proprietary ones. Ubuntu now offers convenient installation and management of NVIDIA drivers, and presumably AMD cards will get the same treatment before long. This is the open-source community compromising with the closed-source world, and I think it’s an important move for attracting users and growing the community.
  7. Ubuntu’s dual-boot support is also good; if you can’t get used to it, you can always install Windows or macOS alongside.

Finally, let me force a grander theme onto this post and explain, at the macro level, why I chose to join the open-source camp now. This round of trade friction should have made more people realize how important open-source software is to the whole software ecosystem. Large proprietary-software companies will inevitably be used by politicians as bargaining chips in games against rival countries, which is not only a deadly blow to research institutions and large enterprises but also a major inconvenience to individual users. Promoting open-source software is not just good for developing countries like China in the short term; in the long run it benefits any country that doesn’t monopolize the software industry. Proprietary software and proprietary systems may never disappear, but as more and more people grow used to open-source software, the user’s freedom to choose can no longer be taken away by any policy of any country.

So if you have some spare time, give Ubuntu or another Linux distro a try, and taste the joy of open source.


Use Raspberry Pi as Router with PPPoE


The router at my home has been struggling with the high temperatures of summer, and its performance gave me headaches whenever I needed a smooth connection. The Raspberry Pi is a robust mini-computer, so I soon committed to the idea of turning a Raspi 3 into my new router. After eight hours of trial and error I finally got it working on a PPPoE network, so I’d like to share my experience. The following tutorial is largely based on Turn a RaspBerryPi 3 into a WiFi router-hotspot.

The Raspi I used is a Raspberry Pi 3 Model B. I expect the Model B+ and Zero W to work too, but I haven’t tested them. If you are using an older Raspi you might need a WiFi dongle.

First things first, let’s make sure the Raspi can connect to the Internet over PPPoE. Download pppoeconf from here and install it with sudo dpkg -i pppoeconf_1.21_all.deb. Now connect the Ethernet cable to the Raspi and launch pppoeconf with sudo pppoeconf. Follow the instructions, and by the end you should be able to access the Internet.

Now let’s set up DHCP. Feel free to change parameters like the IP addresses or DNS servers in the following configuration if you know what you’re doing; otherwise you can safely copy and paste everything here. Execute the following commands:

sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install hostapd isc-dhcp-server
sudo nano /etc/dhcp/dhcpd.conf

Comment out these two lines:

option domain-name "example.org";
option domain-name-servers ns1.example.org, ns2.example.org;

Uncomment this line:

#authoritative;

Copy and paste the following code to the end of the file:

subnet 192.168.42.0 netmask 255.255.255.0 {
range 192.168.42.10 192.168.42.50;
option broadcast-address 192.168.42.255;
option routers 192.168.42.1;
default-lease-time 600;
max-lease-time 7200;
option domain-name "local";
option domain-name-servers 8.8.8.8, 8.8.4.4;
}

Press Ctrl+X to exit, then y to save and Enter to confirm.

sudo nano /etc/default/isc-dhcp-server

Change INTERFACES="" to INTERFACES="wlan0". Save and exit.

sudo ifdown wlan0
sudo nano /etc/network/interfaces

Add the following lines to the end:

auto lo
iface lo inet loopback
iface eth0 inet dhcp
allow-hotplug wlan0
iface wlan0 inet static
address 192.168.42.1
netmask 255.255.255.0
post-up iw dev $IFACE set power_save off

Save and exit.

Configure the router’s IP address:

sudo ifconfig wlan0 192.168.42.1

The DHCP setup is done. Time to handle WiFi.

sudo nano /etc/hostapd/hostapd.conf

Add the following lines to the file:

interface=wlan0
# change the ssid below to the name you want for your WiFi
ssid=RaspiPoweredWifi
hw_mode=g
# pick another channel if you know what you are doing
channel=6
macaddr_acl=0
auth_algs=1
ignore_broadcast_ssid=0
wpa=2
# change the passphrase below to your WiFi password
wpa_passphrase=12345678
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP

Save and exit. Now we can set up forwarding:

sudo nano /etc/sysctl.conf

Jump to the very end and add:

net.ipv4.ip_forward=1

Save and exit.
Set up iptables (some tutorials use ppp0 instead of eth0 here, but in my tests eth0 worked and ppp0 didn’t):

sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT

Add to startup:

sudo sh -c "iptables-save > /etc/iptables.ipv4.nat"
sudo nano /etc/network/interfaces

Go to the end and add:

up iptables-restore < /etc/iptables.ipv4.nat

Save and exit.
Start two services:

sudo service hostapd start
sudo service isc-dhcp-server start

Reboot. Mission accomplished.


Chicken-Baking Oven Controller: An “Unnecessary” Attempt

I want baked chicken wings for lunch. Freshly baked chicken wings with sesame oil and celery: nothing beats a delicious, nutritionally balanced lunch that powers me through another productive afternoon. But I only have one hour for lunch. Even though I live five minutes from the company, I still need 45 minutes to cook the wings and another 20 minutes to enjoy them. My manager is nice enough to grant me some extra lunch time when he hears the story, but I still feel guilty standing next to an oven doing nothing for 45 minutes. I need a way to start cooking the wings with a command sent over the Internet while I’m still at work, so that the food is ready by the time I get home. That’s right: I want a chicken-baking oven controller.

The apartment I live in has an electric range with a stove on top and an oven below. At the very top is a control panel with several knobs, one per heating coil, and it looks like this:

Because I’m renting the apartment, I can’t simply take the appliance apart and add relays to switch the coils on and off. All I can do is attach some mechanism to the control panel that physically turns the oven knob. And since I can’t damage the range at all, which means no drilling, the mechanism must be fixed to the panel with magnets, glue or adhesive tape. The knob doesn’t require a push before turning, which makes things slightly easier.

The plan is not too complicated. A coupler that fits the shape of the knob sits at the end of the controller, so that when the coupler turns, the knob turns. A small motor, preferably a small servo, drives the coupler and provides angle control. A WiFi-capable MCU is needed to program the controller, be it an Arduino, a RasPi or something else. I decide to start with a NodeMCU with an ESP8266 because it’s cheap, Internet-ready and powerful enough for the project.

The first preliminary design uses the NodeMCU to drive a servo that turns the coupler back and forth, just to see whether the servo has enough torque to turn the knob. The coupler looks like this:

The front of the coupler fits the shape of the knob, and the triangular cutout in the center lets the servo drive it. Very soon after the first test starts, I realize that the torque generated by one servo is simply not enough to turn the knob. Moreover, my servo can only turn from about 5° to 170° due to its design, but the knob needs to turn at least 270°; otherwise I get undercooked chicken wings. There are online tutorials on modifying a cheap servo for 360° rotation, but I worry that would break the servo library’s assumptions in the code and complicate precise control. A stepper motor might be a great option, since steppers have no rotation limits and can be controlled very precisely, but they’re a bit pricey (for me) and need more power at a higher voltage (than 5V) to drive, so that’s the backup plan. A gearbox can increase the overall output torque, and partial-tooth gears can extend the angle the knob can turn. Combining a gearbox with partial-tooth gears, I come up with this design:

Each of the four gears on the sides is attached to a servo. The central master gear is turned by the four gears one at a time, so each servo only needs to rotate a little more than 90°. The pitch circle of the master gear is also larger than that of each servo gear, so each servo can provide enough torque to rotate it. The triangular insert at the end of the master gear fits into the triangular hole on the coupler so the knob can be turned. Every piece is 3D printed and assembled. The four servo gears need to hand the master gear off from one to the next with precise timing, which requires a fair amount of calibration and code tweaking. The final prototype works like this (click to see the animated gif):
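The gear math behind the design is simple; here is a minimal sketch where all the numbers are hypothetical (only the ratios matter):

servo_torque = 1.8  # micro-servo stall torque in kg*cm (assumed)
r_servo = 1.5       # pitch radius of each servo gear in cm (assumed)
r_master = 2.0      # pitch radius of the master gear in cm (assumed)

ratio = r_master / r_servo  # a gear pair trades angle for torque by this factor

knob_torque = servo_torque * ratio  # torque delivered to the knob
knob_angle_per_servo = 92 / ratio   # knob degrees per ~92 degrees of servo travel

print(knob_torque)               # 2.4 kg*cm available at the knob
print(4 * knob_angle_per_servo)  # 276 degrees total, covering the 270 needed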

Everything works perfectly until I fix the housing to the range’s control panel with double-sided tape. The housing can hardly hold its position once the servos are working: by Newton’s third law, the torque applied to the knob is also applied to the housing, and because double-sided tape offers little resistance parallel to the panel, the housing tends to rotate against the servo gears, fouling the gear meshing and putting even more torque on the knob and the housing.

In the end I decide to stop the project, as I can hardly think of a way to fix the housing firmly in place without damaging the appliance. On top of that, the project doesn’t seem very useful, because one could easily install an MCU and relays inside the oven and control everything over the Internet without any of the mechanical design issues. Looking back, though, this “unnecessary” project did give me a better appreciation of how important the initial design is. I should have thought through the possible problems at the beginning and picked a different approach.

Lessons learned. Time to order some chicken wings for lunch.


DIY Locked Door Detector

Did you lock the door today?

Let me ask again: did you lock the door today?

Are you sure you do not want to go back and check?

Welcome back. After a few mornings of waking up to find the apartment door unlocked, I decided to do something to save myself from my own carelessness. The idea is simple: build a locked-door detection system that notifies me every time I forget to lock my door. A lovely schematic soon appeared on my napkin (or draw.io):

DoorDetectionDiagram
A magnet is attached to the end of the deadbolt of my door lock. Inside the strike box is a hall effect sensor that detects its distance from the magnet. A micro-controller reads the hall effect sensor, so it knows whether the door is locked. If the door is not locked, the micro-controller sends a notification to my phone. Since there is no power socket near the door, I am going to power the micro-controller with a rechargeable battery.

What a smart idea! As I was submerged in self-pride, a thread on a micro-controller community rescued me from drowning:

discussionFromParticle

 

Well, it seems someone brought the idea to the table back in 2014…

 

Anyway, I am going to build it, in a 2016 way.

 

Bill of Materials

  • Magnet tape

tape

  • Hall effect sensor – Note that as of today (08/03/2016) the only hall effect sensor on Sparkfun is a latching one (US1881), which is good for determining the polarity of a magnet but not the magnitude of the magnetic field (which indicates the distance from the magnet). I bought an A1324LUA-T, a linear hall effect sensor. As described by its datasheet, “the presence of a south-polarity magnetic field perpendicular to the branded surface of the package increases the output voltage from its quiescent value toward the supply voltage rail. The amount of the output voltage increase is proportional to the magnitude of the magnetic field applied. Conversely, the application of a north polarity field will decrease the output voltage from its quiescent value.” In short, if the south end of the magnet always faces the sensor, the output voltage increases as the distance between them decreases, and vice versa.

hallEffectSensor

  • Micro-controller – Almost any major micro-controller can do the job, but for this project I am using a Particle Photon because it has a built-in Wi-Fi module and an easy-to-use cloud IDE, which perfectly meets the design requirements.

photon

  • SparkFun Photon battery shield – Optional; as long as you know how to power the Photon from a battery and how to charge that battery, you are good to go. I chose the battery shield just to make life easier.

batteryShield

  • Li-ion battery – I bought a 2000 mAh battery with a JST cable. When it comes to batteries, bigger is better.

battery

  • Fastener tape/double-sided tape

  • Wires

The total cost is around $50, depending on how many tech-savvy friends you have.

 

Wire them up

As shown in the schematic above, the wiring is very simple. Here is the step-by-step recipe:

  1. Set up the Photon. Here is the detailed instruction.
  2. Mount the Photon on the battery shield.
  3. Put the hall effect sensor on a table with the branded side (the uneven side) facing up. Connect the leftmost pin to the 3V3 pin on the Photon, the center pin to GND, and the rightmost pin to A0.
  4. Cut a small piece of magnet tape and paste it on the end of the deadbolt.
    IMG_0816
  5. Stick one part of a fastener tape on the inside wall of the strike box. Put the other part on the back of the hall effect sensor. Press the two parts together. I bent the sensor’s legs to fit the wires inside the strike box.
    IMG_0815 IMG_0814
  6. Plug the battery into the battery shield. Done.

I ended up getting this on my wall:

IMG_0829

I moved to a new apartment while I was building this, so the door in this picture is different from the one in the others. One thing you can learn from this picture is that tape is really helpful 😛 . I am going to design a housing for this system and 3D print it. Hopefully in a few weeks I will no longer have this mess on my wall.

 

A Little Test

To compare the hall effect sensor’s reading when the door is locked with the reading when it’s unlocked, I flashed the Photon with Tinker and set pin A0 to “analogRead”. When the door is unlocked the reading is around 2030, and when the door is locked the reading is nearly 0. The difference is significant enough; time to move on!

 

Coding Time

Not a big fan of coding? No problem. Feel free to copy and paste the code below.

Start with the Particle IDE. Create a new app and give it a cool name. The code below will do the magic.

 

void setup() {
    pinMode(A0, INPUT); // set pin A0 as input
}

void loop() {
    int starttime = 0;
    int realtime = 0;
    int notification = 0;

    while (1) {
        int val = analogRead(A0); // read A0; a high value means the door is unlocked
        if (val > 1900) { // door is unlocked
            if (starttime == 0) { // door was just unlocked
                starttime = Time.now(); // remember when the door became unlocked
            } else {
                realtime = Time.now(); // the time now
                if ((realtime - starttime) > 10 && notification == 0) { // unlocked for more than 10 seconds
                    Particle.publish("unlockedDoor"); // publish the "unlockedDoor" event to the Particle cloud
                    notification = 1; // notification has been sent
                }
            }
        } else { // door is locked: reset the timer and the notification flag
            starttime = 0;
            notification = 0;
        }
        delay(500); // loop every 0.5 seconds
    }
}

What does it do? Whenever the door is unlocked, the Photon starts counting to 10 seconds. If the door is not locked within those 10 seconds, an event called “unlockedDoor” is published to the Particle cloud.

I do not want to stare at the Particle console 24/7 waiting for the event to appear. Instead, I want Particle to notify me when it sees the event. On my iPhone there is an app called Boxcar that can push notifications. If only I could have Particle call the Boxcar API… Enter webhooks. A webhook can be created in the Particle online console under “Integrations”. The setup should look like this:

webhook1

Tada! Now if I forget to lock my door, this appears on my phone:

IMG_0831


The Flowing Xin’an River

In the beginning, the process was a cause. Later, the process became a result.

 

For hundreds and thousands of years the Xin’an River has flowed in a translucent posture. It is elusive, now a shadow, now in plain sight. Sometimes it flows past the jianbing stall in the early morning, sometimes across the school sports ground, sometimes over a banquet table littered with cups and plates; and sometimes it does not flow at all.

 

The history of the Xin’an River can no longer be verified; even the oldest man on its banks isn’t sure. He took another slow pull on his pipe and rested his hand on his crossed leg; this was a question he had never thought about. “About seven hundred years, I suppose,” he said, pinching three fingers together to mean seven. The number carried no real meaning; it was merely a symbol. The living hold no memories from seven hundred years ago, so seven hundred, seven thousand, seventy thousand and seven hundred thousand make no difference. “Seven hundred and some” was more of a brush-off, like a number scribbled into a sudoku grid just to keep the story going. And so we know that some seven hundred years ago, a person spoke the words “Xin’an River” for the first time, and that was the river’s birthday. Before that, we cannot even be sure whether any water flowed through this place.

 

So people stopped arguing about where the Xin’an River began. They began arguing about where it would end.

 

Will the river run dry one day? Will the fish and shrimp move away? Will the sea flood back into it? Will the water rise? Will it fall? Is there enough fruit for the river god’s offering? Will the cat fall in and drown? Will the reeds swallow the wetland?

 

Panic spread; no one could answer these questions.

 

The unknown is a terrifying thing, and it quietly eroded the fact of the Xin’an River’s existence. The washing women put away their laundry bats; the men in the fields poured the water in their buckets back into the river; the egret hurried through its last bite of fish; boats were dragged ashore; the river god’s altar stood empty; young people shouldered their luggage and left home. The flowing water became a collection of questions.

 

One day a cat fell into the water, and the splash shattered everyone’s dreams. The village head led a few young men to the bank with oil lamps, walked to the spot where the cat had fallen in, and watched the water closely.

 

The Xin’an River was still flowing, as always.

 

They waited another ten minutes. The Xin’an River was still flowing, as always.

 

Silence spread over the crowd, and the villagers lowered their heads and looked at themselves. Someone quietly tossed a stone into the water; someone else blew a puff of air across the surface. But the flowing of the Xin’an River is an axiom, not a theorem.

 

The horizon lifted the sun, and the new day arrived on schedule. The village dogs gathered to watch the women wash clothes; the men in the fields scooped the poured-out water back into their buckets without losing a drop; birds and boats were placed precisely on the water, then tied fast with rope; tropical fruit appeared on the river god’s altar; the young people stared blankly at the water.

 

And so, to this day, the Xin’an River flows just as it did seven hundred years ago. Flowing with it are the people on its banks. Like the river, they flicker in and out of sight, and sometimes, unexpectedly, they appear in someone’s life.

 


Animals in Disguise

You may never have noticed this phenomenon: some animals are other animals in disguise.

 

You are walking down the road; a row of sparrows sits on the power line.

 

“A row of sparrows,” you say, pointing at the line.

 

“A row of sparrows.” Your tone is firm, with all the confidence a twelve-year-old should have.

 

“A row of sparrows.”

 

What you failed to notice, however, was that one of the sparrows had a ball-shaped, fluffy tail. That’s right: it was actually a black bear disguised as a sparrow. You didn’t notice his tail; you took him for an ordinary sparrow. And so you missed a chance encounter with a bear.

 

This is a phenomenon found everywhere in nature, happening at every moment. After all, for an animal, disguising itself as another animal is not very hard, while the protection the disguise brings is a real, tangible benefit. Thus we see tigers that are rabbits in disguise, foxes that are donkeys in disguise, peacocks that are turtles in disguise, and nightingales that are mice in disguise.

 

On a calm, sunny morning you stand high on a cliff, gazing down into the wind. Ah: gulls wheel and gather, shimmering fish glide by, the herbs on the banks and the orchids on the sandbars grow lush and green. A feeling of soaring elation washes over you, and a tear slips from the corner of your eye.

 

But as a zookeeper, you must remember: the animals that bristle and bare their teeth and claws at you really have only four stubby legs and one fragile heart; the animals that hold forth on heaven and earth are usually shallow scholars who like to bury their heads in holes; the animals with gorgeous plumage and graceful figures may be thick-skinned and tough-fleshed, fit only for twice-cooked pork. As for the animals that often gaze at the sky at a forty-five-degree angle, you had best wash them clean, skin them, gut them, discard the offal, stuff them with scallion, ginger and garlic, add a spoonful of soy sauce and a splash of cooking wine, deep-fry then braise, serve with side dishes, and eat straight from the pot; the protein is five times that of beef.

 

Still, you should feed them something good now and then. After all, they, like you, are still growing.

 


The Last Moments Post

August 18, 2025, 11 p.m. These may be the last words ever posted to WeChat Moments.

 

Mahuateng lay sprawled on the IKEA KIVIK sofa in the living room, dead drunk. His left hand clutched an empty bottle of Budweiser to his chest; his right arm hung straight down, palm up, to the carpet, making him look like a toppled Statue of Liberty. His dark brown leather shoes dangled from the toes of his white-socked feet, giving off the sour tang of pig liver reacting with hydrogen peroxide. I hid in the study with the door shut tight, to keep from setting off the smoke alarm.

 

He came to see me this afternoon. No call, no text, no warning; by the time I heard the pounding on the door and peered out through the peephole, this slightly balding middle-aged man was already drunk beyond saving. He said nothing after coming in, just kept drinking. I said nothing either, and watched him drink. At first he merely looked worried, panting heavily at the jasmine-green package of Mind Act Upon Mind tissues on the table, his gaze so deep that the lamplight bent at the corners of his eyes. Then, suddenly, he burst into tears. A man past fifty, crying like that; it caught me so off guard that I didn’t dare hand him the tissues, afraid his gaze would pull me in. After about the length of two songs, his crying gradually subsided into snuffling sobs.

 

He said he wanted to go back to the past, back ten years, to the green and innocent era that belonged to Moments. That year, Apple had not yet gone bankrupt, and before every meal everyone held up the newly released iPhone 6 to photograph their food for Moments, with the understated tag “via iPhone 6”. That year, everyone was obsessed with fitness: they ran 10 kilometers a day, 20 kilometers, 42.195 kilometers, then did 150 crunches, 200 squats, 20 minutes of plank, 40 behind-the-back belly-button touches and an hour of hatha yoga. They squeezed out abs in front of the gym mirror, then photographed them and uploaded the picture to Moments: “Phew, so tired today, but working out makes me so happy.” That year, everyone was a superb cook, from Japanese cuisine to southern barbecue, from strawberry pudding to New England lobster, from stargazy pie to pitch-dark curry chicken; there was nothing they could not make. Michelin picked that year’s five-star chefs through big-data analysis of Moments; A Bite of China 3 collected more than 100,000 pictures from Moments and its ratings soared. People slaked their appetites by scrolling Moments while gnawing on potatoes, and so survived the food crisis; Tencent was duly awarded a lifetime medal of honor by the UN World Food Programme, displayed as a lit badge on QQ. That year, everyone loved reading, loved traveling, made fortunes, bought luxury cars, had fair skin, glossy hair and one unforgettable love. That year, the sky was so blue.

 

“I want to go back,” he said.

 

I said nothing, walked to the fridge and got him another Budweiser. The cap came off; white vapor curled up.

 

He took the beer but didn’t drink, didn’t even look. With a clack, the glass bottle came down on the glass tabletop, splashing up a spray of foam.

 

“What is wrong with this era?” He raised his voice.

 

“Tell me. What is wrong with this era?”

 

“Why…” He got out those three syllables and could go no further. Tears streamed down his face; his limbs twitched.

 

No one was posting to Moments anymore. For six whole months, not a single person had posted. The fitness guy became a bureau chief, busy with banquets, one ab left of the original eight, with the “three highs”, late-stage non-insulin-dependent diabetes, and still a great talker. The foodie girl rose to investment-banking white collar: three kids, diligent at work, her favorite hobby feeding yesterday’s reports into the shredder with her own hands at 4 a.m. in Xujiahui, two meals a day, Subway, no sauce. The photography guy went searching for starlight in the city’s night sky, was hospitalized after long-term inhalation of air with excessive PM2.5, became best friends with the old man in the next bed, and now discusses the four ways to crack the “Two Horses Drinking at the Spring” endgame with him every day. The artsy girl found a boyfriend on Baihe.com and started listening to Cui Jian.

 

Mahuateng tried to roll over on the sofa, failed, and sank back into the night.

 

If you see this post, please, post something to your Moments to save Mahuateng. Thanks; it’s urgent.

 
