So - I have two ways to deal with this.
The first way - I scp each Pi's IP address as a file (after the Pis start up) to my work computer, then generate an ssh_config file. The second way - I request the same IP from my hotspot every time. All my Pis are running Debian Jessie with systemd.
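For the second way, the idea is just to have each Pi ask the hotspot for the address it had last time. A minimal sketch, assuming the image uses ISC dhclient for wlan0 (if yours runs dhcpcd instead, the equivalent setting lives in /etc/dhcpcd.conf), using raspberry-pi-1's address as the example:
# /etc/dhcp/dhclient.conf - ask the hotspot for the same address every lease
# (the server is free to hand out something else; a reservation on the hotspot is more reliable)
interface "wlan0" {
    send dhcp-requested-address 192.168.0.144;
}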
The first way:
scp (or you could email it) the IP address after networking has started up.
The links:
Running Services After the Network is up
[Solved] SystemD, how to start a service after network is up
Overview:
To do this with systemd, I had to create the script to scp the IP, make sure it was executable, and copy it to /usr/bin/. Then I had to create a systemd service file, copy that to /etc/systemd/system/, and enable the service.
The Details:
The script to scp the IP address:
You'll undoubtedly notice -o StrictHostKeyChecking=no in the script below. This is to avoid the nastiness of the host not being in the known_hosts file.
$ cat send_ip_address.sh
#! /bin/bash
# Pull the IPv4 address out of ifconfig's "inet addr:" line for wlan0 (Jessie-era ifconfig output)
IP_ADDR=`ifconfig wlan0 | grep 'inet addr:' | cut -f 12 -d ' ' | cut -c 6-`
# Write it to a file named after this host, then copy that file over to my work machine
echo $IP_ADDR > /tmp/$HOSTNAME
scp -i /home/panchod/.ssh/id_rsa -o StrictHostKeyChecking=no /tmp/$HOSTNAME panchod@192.168.0.174:/home/panchod/lan/ip_addresses/
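If you'd rather not turn off host key checking, a different trick (just a sketch - the service below has no User=, so it runs as root and ssh will look in root's known_hosts) is to pre-seed the work machine's key once on each Pi:
mkdir -p /root/.ssh
ssh-keyscan -H 192.168.0.174 >> /root/.ssh/known_hosts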
The service file:
$ cat send_ip.service
[Unit]
Description=Send ip address to my Toshiba on startup
# Wants= actually pulls network-online.target in; After= on its own only orders against it
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/send_ip_address.sh

[Install]
WantedBy=multi-user.target
WantedBy=graphical.target
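To check the unit actually works on one Pi before rolling it out everywhere, the usual systemd commands do the trick:
sudo systemctl daemon-reload
sudo systemctl start send_ip.service
journalctl -u send_ip.service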
And then I wrote a script to install everything:
$ cat set_up.sh
#! /bin/bash
sudo cp -iv send_ip_address.sh /usr/bin
sudo cp -iv send_ip.service /etc/systemd/system/send_ip.service
sudo systemctl enable send_ip.service
I made set_up.sh executable, put all three files in the same directory, tar'd and gzip'd the directory, then just scp'd the tarball to each Pi, untarred it, and ran set_up.sh. Done and done. I should have used Ansible - but I'm not there yet.
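If you want to see roughly what that looked like - this is a sketch, not the exact commands I ran, and the directory name send_ip_setup is made up:
# bundle send_ip_address.sh, send_ip.service and set_up.sh
tar czvf send_ip_setup.tar.gz send_ip_setup/
# push and run on each Pi (first three IPs shown - same idea for the rest);
# -t gives sudo a terminal to prompt on
for ip in 192.168.0.198 192.168.0.144 192.168.0.154; do
    scp send_ip_setup.tar.gz panchod@$ip:
    ssh -t panchod@$ip 'tar xzvf send_ip_setup.tar.gz && cd send_ip_setup && ./set_up.sh'
done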
On my work machine - it's kind of kooky how I have it set up, so it's probably not what you'd do. Because I use ssh for work and can't mess with ~/.ssh/config (it's shared), I'll tell you what I do instead.
I have an alias set up for my local lan and use a separate config file.
alias lssh='ssh -F ~/lan/ssh_config'
ssh_config looks like this:
$ cat ~/lan/ssh_config
Host localCluster.*
    User root
    Port 22
    IdentityFile /home/panchod/.ssh/id_rsa-new_pi_net
Host localCluster.bpi-iot-ros-ai
    HostName 192.168.0.198
Host localCluster.raspberry-pi-1
    HostName 192.168.0.144
Host localCluster.raspberry-pi-2
    HostName 192.168.0.154
Host localCluster.raspberry-pi-3
    HostName 192.168.0.153
Host localCluster.raspberry-pi-4
    HostName 192.168.0.117
Host localCluster.raspberry-pi-6
    HostName 192.168.0.131
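With that in place, getting onto a Pi is just, for example:
lssh localCluster.raspberry-pi-1
which drops me onto raspberry-pi-1 as root with the right key.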
I have bash completion set up for it:
#=====================================================================
# LAN SSH Autocomplete
#=====================================================================
_complete_lssh_hosts()
{
    COMPREPLY=()
    cur="${COMP_WORDS[COMP_CWORD]}"
    comp_lssh_hosts=`cat ~/lan/ssh_config | \
        grep "^Host " | \
        awk '{print $2}' | grep -v "\*"`
    COMPREPLY=( $(compgen -W "${comp_lssh_hosts}" -- $cur) )
    return 0
}
complete -F _complete_lssh_hosts lssh
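Wherever you keep that function, it just needs to get sourced from ~/.bashrc - something like this (the file name is only an example):
[ -f ~/lan/lssh_completion.bash ] && . ~/lan/lssh_completion.bash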
So - when I type lssh <tab> it starts off with localCluster. I can then hit r<tab> for raspberry-pi and it fills in the rest, then I just pick which one I want.
I also had to make a script to generate the ssh_config file. It's a little more complicated than it needs to be, but there was a reason I did it this way.
First - I have all the Pi stuff in ~/lan/. As you can see above, the IP addresses land in ~/lan/ip_addresses/. I put the header of ssh_config in ~/lan and called it ssh_config_preface.txt. It looks like this:
Host localCluster.*
    User root
    Port 22
    IdentityFile /home/panchod/.ssh/id_rsa-new_pi_net
The script to generate the config that you see above:
$ cat set_up_ssh.sh
#! /bin/bash
BASE_DIR=/home/panchod/lan
SSH_FILE=$BASE_DIR/ssh_config
CLUSTER_FILE=$BASE_DIR/cluster

# Start ssh_config fresh from the preface (the Host localCluster.* block)
cat $BASE_DIR/ssh_config_preface.txt > $SSH_FILE

# Reset (or create) the single localCluster line in the cssh cluster file
if grep -q localCluster $CLUSTER_FILE ; then
    echo "Cluster exists in $CLUSTER_FILE - replacing it"
    sed -i 's/localCluster.*/localCluster /g' $CLUSTER_FILE
else
    echo "Cluster doesn't exist - creating it"
    echo "localCluster " >> $CLUSTER_FILE
fi

# One file per Pi in ip_addresses/, named after the host, containing its IP
for i in $BASE_DIR/ip_addresses/* ; do
    NAME=`basename $i`
    #echo $NAME
    IP=`cat $i`
    #echo $IP
    echo "Host localCluster.$NAME
    HostName $IP" >> $SSH_FILE
    # Append this IP to the localCluster line for cssh
    sed -i "s/^localCluster.*$/& $IP/g" $CLUSTER_FILE
done

echo $SSH_FILE
cat $SSH_FILE
echo; echo
echo $CLUSTER_FILE
cat $CLUSTER_FILE
I also use cssh, so I have an alias for that as well: alias lcssh='cssh -c /home/panchod/lan/cluster'.
I should add a test to ensure that the files exist, but this isn't production, and I only wrote them for me, so if they fail - no one to blame but myself.
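If I ever get around to it, that test would just be a couple of lines near the top of set_up_ssh.sh (a sketch, nothing clever):
[ -f $BASE_DIR/ssh_config_preface.txt ] || { echo "missing ssh_config_preface.txt"; exit 1; }
[ -d $BASE_DIR/ip_addresses ]           || { echo "missing ip_addresses directory"; exit 1; }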
[Edit: changed the location of .ssh_config for consistency]