Thursday, March 9, 2017

Using dhclient to automatically change IP addresses of Raspberry Pi on start-up

So, in my previous post I dealt with changing IP addresses on my Pi cluster by scp'ing the new IP addresses to my work machine after each Pi had started up and joined the network, and generating an ssh_config file from that info.

This is fine for most occasions. But I have one instance set up to run zookeeper, and I'd have to go around and change the config files on all my other Pis every time its address changed. Ansible could cure this, and I'll get there one of these days - but I'm not there today. And actually - I think I'll use the script I wrote previously to update Ansible's hosts file. I just now thought of that.

Anyway - I want the zookeeper Pi to always have the same address.

I combined this question with what I learned about creating systemd start-up jobs to request the same IP address from my hotspot.

The gist of the answer is to add send dhcp-requested-address 192.168.0.XXX; to /etc/dhcp/dhclient.conf.
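For reference, a minimal /etc/dhcp/dhclient.conf along those lines might look like this - the interface name and address here are just what my setup uses, so substitute your own:

```
# Ask the DHCP server for this specific address on wlan0.
interface "wlan0" {
    send dhcp-requested-address 192.168.0.131;
}
```

The server is free to ignore the request, but most consumer hotspots will honor it if the address is free.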

I then created a short executable script in /usr/bin:
$ cat /usr/bin/
#! /bin/bash

# Release the current lease, then request a fresh one (staying in the
# foreground, per -d) using the address set in dhclient.conf.
dhclient -r -v && dhclient -4 -d -v -cf /etc/dhcp/dhclient.conf wlan0
Then I created the service file:
# cat /etc/systemd/system/request_ip.service
Description=Change ip to 131 for myProject
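Only the Description line survived the paste above, so here's a plausible sketch of the full unit - the ExecStart path is a stand-in for whatever you named the wrapper script, and Type=simple matches dhclient's -d flag keeping it in the foreground:

```
[Unit]
Description=Change ip to 131 for myProject
Wants=network-online.target
After=network-online.target

[Service]
# Stand-in name: point this at the wrapper script you put in /usr/bin.
ExecStart=/usr/bin/request_ip.sh
Type=simple

[Install]
WantedBy=multi-user.target
```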


I installed request_ip.service in /etc/systemd/system and enabled it with systemctl enable request_ip.service. Checking syslog, I know it ran. I'll update this post if I run into problems.

One way to deal with changing IP addresses on my Raspberry Pi cluster

One of the annoyances I have with my router (it's actually a hotspot) is that every time I start up my Pi cluster, the Pis all have different IP addresses than they had the day before.

So - I have two ways to deal with this.

The first way - I scp the IP address as a file (after the Pis start up) to my work computer, then generate an ssh_config file. The second way - I request the same IP from my hotspot. All my Pis are running Debian Jessie with systemd.

The first way: 

scp (or you could email it) the IP address after networking has started up.

The links:

Running Services After the Network is up
[Solved] SystemD, how to start a service after network is up.


To do this with systemd, I had to create the script to scp the IP, make sure it was executable, and then copy it to /usr/bin/.

Then, I had to create a systemd service file, copy that to /etc/systemd/system/ and enable the service.

The Details:

The script to scp the IP address:

#! /bin/bash

IP_ADDR=`ifconfig wlan0 | grep 'inet addr:' | cut -f 12 -d ' ' | cut -c 6-`
echo $IP_ADDR > /tmp/$HOSTNAME

scp -i /home/panchod/.ssh/id_rsa -o StrictHostKeyChecking=no /tmp/$HOSTNAME panchod@
You'll undoubtedly notice -o StrictHostKeyChecking=no. This is to avoid the nastiness of the host not being in the known_hosts file.
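If you're wondering what that cut pipeline actually extracts, here it is run against a canned line of the old-style ifconfig output it expects (the addresses are made up):

```shell
# Made-up sample of pre-Stretch ifconfig output; note the ten leading
# spaces - 'cut -f 12 -d " "' counts the empty fields they create.
line='          inet addr:192.168.0.131  Bcast:192.168.0.255  Mask:255.255.255.0'
IP_ADDR=$(echo "$line" | grep 'inet addr:' | cut -f 12 -d ' ' | cut -c 6-)
echo "$IP_ADDR"   # 192.168.0.131
```

It's fragile - newer ifconfig output drops the "addr:" prefix and changes the whitespace - but it works on Jessie.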

The service file:

$ cat send_ip.service
Description=Send ip address to my Toshiba on startup
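The body of the unit got lost above as well; a plausible reconstruction looks like this - the ExecStart name is a stand-in for the scp script copied into /usr/bin, and the network-online ordering is the point of the links above:

```
[Unit]
Description=Send ip address to my Toshiba on startup
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
# Stand-in name for the scp script copied into /usr/bin.
ExecStart=/usr/bin/send_ip.sh

[Install]
WantedBy=multi-user.target
```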


And then I wrote a script to install everything:
$ cat
#! /bin/bash

sudo cp -iv /usr/bin
sudo cp -iv send_ip.service /etc/systemd/system/send_ip.service
sudo systemctl enable send_ip.service

I made the script executable and put everything in the same directory, tarred and gzipped the directory, then scp'd the tarball to each Pi, untarred it, and ran the installer. Done and done. I should have used Ansible - but I'm not there yet.
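Something like this loop would have done the rounds for me - the hostnames and tarball name are invented, and the echoes make it a dry run that just prints the commands it would issue:

```shell
#! /bin/bash
# Dry-run roll-out loop: hosts and tarball name are hypothetical.
# Remove the echoes to actually copy, unpack, and install on each Pi.
for host in 192.168.0.101 192.168.0.102; do
    echo scp pi-setup.tar.gz pi@"$host":
    echo ssh pi@"$host" "tar xzf pi-setup.tar.gz && cd pi-setup && ./install.sh"
done
```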

On my work machine, it's kind of kooky how I have it set up, so it's probably not what you'd do. Because I use ssh for work, I can't mess with .ssh/config (it's shared), so I'll tell you what I do instead.

I have an alias set up for my local lan and use a separate config file.
alias lssh='ssh -F ~/lan/ssh_config'
ssh_config looks like this:
$ cat ~/lan/ssh_config
Host localCluster.*
  User root
  Port 22
  IdentityFile /home/panchod/.ssh/id_rsa-new_pi_net

Host localCluster.bpi-iot-ros-ai
Host localCluster.raspberry-pi-1
Host localCluster.raspberry-pi-2
Host localCluster.raspberry-pi-3
Host localCluster.raspberry-pi-4
Host localCluster.raspberry-pi-6
And I have bash completion set up for it:
#                       LAN SSH Autocomplete
_complete_lssh_hosts()
{
    local cur=${COMP_WORDS[COMP_CWORD]}
    comp_lssh_hosts=`cat ~/lan/ssh_config | \
        grep "^Host " | \
        awk '{print $2}' | grep -v "\*"`
    COMPREPLY=( $(compgen -W "${comp_lssh_hosts}" -- $cur) )
    return 0
}
complete -F _complete_lssh_hosts lssh
So - when I type lssh <tab>, it starts off with localCluster. I can then hit <r><tab> for raspberry-pi and it fills in the rest, then I just pick the one I want.
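In case it's not obvious what compgen ends up being handed, the pipeline inside the completion function reduces to this (the two-line config here is made up):

```shell
# Extract completable host names from an ssh_config snippet; the
# wildcard Host entry is filtered out, same as in the function above.
hosts=$(printf 'Host localCluster.*\nHost localCluster.raspberry-pi-1\n' | \
    grep "^Host " | awk '{print $2}' | grep -v "\*")
echo "$hosts"   # localCluster.raspberry-pi-1
```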

I also had to make a script to generate the ssh_config file. This is a little more complicated than it needs to be, but there was a reason I did it this way.

First - I have all the pi stuff in ~/lan/. As you can see above - the IP addresses are in ~/lan/ip_addresses. I put the header of ssh_config in ~/lan and called it ssh_config_preface.txt. It looks like this:
Host localCluster.*
  User root
  Port 22
  IdentityFile /home/panchod/.ssh/id_rsa-new_pi_net

The script to generate the config that you see above:
#! /bin/bash

# Assumed locations: the ssh_config matches the lssh alias above, and
# the cluster file matches the lcssh alias below.
SSH_FILE=/home/panchod/lan/ssh_config
CLUSTER_FILE=/home/panchod/lan/cluster

cat /home/panchod/lan/ssh_config_preface.txt > $SSH_FILE

if grep localCluster $CLUSTER_FILE ; then
    echo "Cluster exists in $CLUSTER_FILE - replacing it"
    sed -i 's/localCluster.*/localCluster    /g' $CLUSTER_FILE
else
    echo "Cluster doesn't exist - creating it"
    echo "localCluster        " >> $CLUSTER_FILE
fi

for i in /home/panchod/lan/ip_addresses/* ; do

    NAME=`basename $i`
    IP=`cat $i`

    echo "Host localCluster.$NAME
    HostName $IP" >> $SSH_FILE
    sed -i "s/^localCluster.*$/& $IP/g" $CLUSTER_FILE

done

echo $SSH_FILE
echo; echo

I also use cssh, so I have an alias for that as well: alias lcssh='cssh -c /home/panchod/lan/cluster'.
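That cluster file is a single line - "localCluster" followed by every IP - and the sed in the generator script builds it up one address at a time. A quick demo of that append with made-up addresses:

```shell
# Append two hypothetical IPs to a fresh cluster line, the same way
# the generator's sed does: '&' re-inserts the whole matched line.
CLUSTER_FILE=$(mktemp)
echo "localCluster " > "$CLUSTER_FILE"
for IP in 192.168.0.101 192.168.0.102; do
    sed -i "s/^localCluster.*$/& $IP/g" "$CLUSTER_FILE"
done
cat "$CLUSTER_FILE"
```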

I should add a test to ensure that the files exist, but this isn't production, and I only wrote these for me - so if they fail, there's no one to blame but myself.

[Edit: changed the location of .ssh_config for consistency]