How to make your own LizardCam!

I’ve had a lizard cam of one sort or another running for a few years now. My current iteration is inexpensive (about $125 using my links below) and easy to set up, so I’d like to write a tutorial on how you can set up your own. Here’s what you’ll need:


  1. A web cam, like this one (link). I like this one because there’s a little hole in the fixture that allows you to nail it to walls.
  2. A Raspberry Pi (link). A tiny, inexpensive computer! I recommend a Raspberry Pi 3 (link), since it comes with built-in Wi-Fi, so you can position it anywhere in your home.
  3. A 64 GB micro SD card. This’ll be your Raspberry Pi’s hard drive. Quality matters here; a cheap micro SD card will cause your Raspberry Pi to crash after a few days or weeks. I recommend going with Samsung (link).
  4. Raspbian, the operating system for the Raspberry Pi. You can download it here (link).
  5. An internet connection!


Install Raspbian

First, get Raspbian installed onto your SD card and start up the Raspberry Pi. I used the following instructions to get started. They should work for any savvy Windows user and will leave you nearly ready to set up the cam.

NOTE: The instructions on copying the Raspbian image onto the SD card will vary depending on whether you are using Windows, Mac, or Linux.

The instructions in the video fit Windows perfectly.

Mac (link)

Linux (link)
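For the Linux route, the usual tool is ‘dd’. Here’s a sketch, assuming the image is named raspbian.img and the card shows up as /dev/sdb (both are example names; check yours with lsblk first, because writing to the wrong device wipes it):

```shell
lsblk                                         # identify the SD card's device name
sudo dd if=raspbian.img of=/dev/sdb bs=4M status=progress
sync                                          # flush all writes before removing the card
```

Once dd and sync finish, the card is ready to boot the Pi.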

Highlights from the video:

  1. Obtain the things that you need.
  2. Download the Raspbian image and copy it to the SD card using the appropriate method for your system.
  3. Get the system up and running, update software, and ensure a healthy configuration.

NOTE: Two more things that you should do before moving on to setting up your cam are resizing your root filesystem to fill your entire SD card and setting the correct timezone for your Raspberry Pi.

These instructions (link) should help you with adjusting the root fs size. This way, your operating system can work with the full 64 GB of disk space instead of the default 4 GB.

These instructions (link) should help you with adjusting the timezone for your Raspberry Pi.
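Both chores can also be handled from the command line; this is a sketch assuming a reasonably recent Raspbian image (and America/Chicago is just an example zone):

```shell
sudo raspi-config                              # 'Expand Filesystem' and 'Change Timezone' live in the menus
sudo timedatectl set-timezone America/Chicago  # or set the timezone directly
```

Either way, reboot afterwards so everything picks up the changes.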

Setting up the Cam using Motion

Setting up the camera itself is relatively easy. By this point, your Raspberry Pi should be powered on and you should have a command line in front of you (either by logging into the terminal directly or by starting up a terminal in the OS interface). Run the following command to install ‘Motion,’ the program that powers the web cam.

sudo apt-get install motion -y

Wait for the installation to complete.

When the installation has completed, review Motion’s configuration file at /etc/motion/motion.conf. You can open it using the ‘nano’ command on the command line or a file editor in the OS interface.

sudo nano /etc/motion/motion.conf

You will see a default configuration file in front of you. Most of the configuration items here are fine and do not need to be modified. I do have a few items that I recommend changing, though.

Ensure that you have the right frame size set for your web cam. I use the below settings.

# Image width (pixels). Valid range: Camera dependent, default: 352
width 640

# Image height (pixels). Valid range: Camera dependent, default: 288
height 480

Make sure that you have an appropriate frame rate set. I use 20, since that lowers the network bandwidth and ensures better quality for users geographically far away from me.

# Maximum number of frames to be captured per second.
# Valid range: 2-100. Default: 100 (almost no limit).
framerate 20

# Maximum framerate for stream streams (default: 1)
stream_maxrate 15

Ensure that the ‘stream_port’ setting is changed from ‘0’ to ‘80,’ and also set the ‘stream_localhost’ setting to off. This way you can actually see your stream over the internet!

# The mini-http server listens to this port for requests (default: 0 = disabled)
stream_port 80

# Restrict stream connections to localhost only (default: on)
stream_localhost off

There may be some other configuration items that I’m missing, and there may be other tweaks that you want to make, but if you want a working configuration with few questions asked, you can find my current configuration here (link).

Once you’ve done all that, give things a test by plugging your webcam into the Raspberry Pi and starting up Motion. You can start up Motion using the following command.

sudo /usr/bin/motion -c /etc/motion/motion.conf

All being well, you should see the light on your webcam light up, which indicates that Motion is capturing frames. Next, test whether you can actually see the stream by going to your Raspberry Pi’s local area network address in your browser. You can find out the local area network address by running this command.

ifconfig | grep 192
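If that grep comes back empty (not every home LAN uses 192.168.x.x addresses), these alternatives show the same information on newer Raspbian images:

```shell
hostname -I       # prints just the Pi's IP address(es)
ip -4 addr show   # per-interface detail; 'ip' replaces the older 'ifconfig'
```

Use whichever address sits on your home subnet.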

When done testing, run the following command to turn Motion off.

pidof motion | sudo xargs kill -9

Finishing Touches

The last things to do are setting up a schedule by which your cam runs (if that’s something that you want), positioning your cam and Raspberry Pi appropriately in your home, and making sure that the outside world can reach it by setting up pinholes through your home router.


Using Cron Jobs to Schedule your Cam

Let’s talk about scheduling the cam first. In Linux, scheduling is done using something called ‘cron jobs (link).’ Here are the cron jobs I use, along with an explanation of how they work and how I install them.

#lizard cam start
0,20,40 13,14,15,16,17,18 * * * root /usr/bin/motion -c /etc/motion/motion.conf

#lizard cam stop
19,39,59 * * * * root /bin/sleep 55; /bin/pidof motion | /usr/bin/xargs /bin/kill -9

I install the cron jobs by creating a file in /etc/cron.d/ called ‘motion’, and then copying the above contents into this file. Here’s what they do.
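Spelled out as commands, that installation step looks like this (the file name ‘motion’ is just my choice; any name in /etc/cron.d/ works):

```shell
# Create /etc/cron.d/motion containing the two jobs above.
sudo tee /etc/cron.d/motion > /dev/null <<'EOF'
#lizard cam start
0,20,40 13,14,15,16,17,18 * * * root /usr/bin/motion -c /etc/motion/motion.conf

#lizard cam stop
19,39,59 * * * * root /bin/sleep 55; /bin/pidof motion | /usr/bin/xargs /bin/kill -9
EOF
```

Cron picks up files in /etc/cron.d/ automatically, so no restart is needed.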

  1. At the 0th, 20th, and 40th minute (0,20,40) of the 13th, 14th, 15th, 16th, 17th, and 18th hours (13,14,15,16,17,18) every day of the month (*) of every month (*) and every day of the week (*), start the web cam as the root user (root /usr/bin/motion -c /etc/motion/motion.conf).
  2. At the 19th, 39th, and 59th minutes (19,39,59) of every hour (*) of every day of the month (*) of every month (*) and every day of the week (*), wait 55 seconds from the start of the minute and then stop the Motion process if any are running (root /bin/sleep 55; /bin/pidof motion | /usr/bin/xargs /bin/kill -9).

You can modify the cron jobs to fit an appropriate schedule. Just make sure that you are setting it according to the Pi’s system timezone, and not whatever your local timezone actually is. If you adjusted it during the setup above, then this shouldn’t be a concern.


Always on Web Cam

If you want your web cam to always be on when you start up your Raspberry Pi, then you will want to do the following.

Open up the /etc/rc.local file (link) with a text editor either on the command line or in the OS interface.

sudo nano /etc/rc.local

Put the following command into the rc.local file just above the “exit 0” line.

/usr/bin/motion -c /etc/motion/motion.conf

All being well, the next time that you plug in your Raspberry Pi your webcam should start immediately.
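For reference, the tail end of /etc/rc.local ends up looking like this (everything else in the file stays as it was):

```shell
# ... existing rc.local contents above ...
/usr/bin/motion -c /etc/motion/motion.conf
exit 0
```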



Assuming that you’ve made it this far without issues, you can unplug your Raspberry Pi from everything, position it in a spot in the house within reach of an internet cable, power, and the place where you want your web cam, and then plug everything in!


Pinholes through your Router

If you want the outside world to be able to see your cam, then you will need to make sure that incoming traffic from the internet can pass through your router to the Raspberry Pi. Every router is a bit different, so I can’t provide detailed instructions here.

However, the basic gist of it will be the same across all platforms. Make sure that outside traffic hitting your public IP address on port 80 can reach your Raspberry Pi’s LAN address on port 80 as well. Once that’s done, then you should be able to go to your public IP address and see your web cam!


Questions and Problems?

If you’re having any issues with these instructions, then please feel free to reach out to me on Twitter. I’ll be glad to help out in any way that I can!

My Site Got Compromised!

So, read the title! Someone managed to exploit my site and install some malware. Ok, ok… not something that you’d want to share publicly, you’d think, but I disagree. Granted, I might not be proud of the fact that my site had exploitable code in it, but I also think that it’s important to tell folks – who host sites as a hobby or for a living – that things like this can happen. And so, I now share with you just how my site got compromised, how I found out about it, and how I dealt with it.

Today I was doing some maintenance on my server cluster. I was installing some security software on one of my auxiliary servers and I was testing out a scan script.

Now, a contextual thing to keep in mind is that this particular auxiliary server is my failover for all of the sites I host. If my primary content server goes down for whatever reason, I can fail over to this particular secondary just by running a simple script. I have scheduled cron jobs that run every day and sync all of the content on my primary content server to this secondary server so that everything matches.

Anyway, I was testing out my scan script. Lo and behold, the scan turns up a single malicious file that ought not to be there. My security admin training from HostGator suddenly kicks in, and I check out the file and look at its contents. Sure enough, it’s malicious! Well! If the file is on my secondary server, it has to be on the primary server from which it’s synced, so I went over there and sure enough, there it was. I looked through my scan logs and found evidence there as well.

So, my site got compromised. Now it’s time to go over how I dealt with it. This is a bit advanced, I think, and I’m not tailoring it precisely for new users, but it might help some folks find their way if they need to.

The first thing that you do in a situation like this is run the ‘stat’ command on the malicious file that your scanner found. I did this, and here is what I got.

File: `ajax-upload.php’
Size: 177962 Blocks: 352 IO Block: 4096 regular file
Device: fd02h/64770d Inode: 134650 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 500/ nope) Gid: ( 500/ nope)
Access: 2015-03-22 00:05:54.445132850 -0500
Modify: 2014-10-16 13:03:52.000000000 -0500
Change: 2015-02-04 21:10:28.178217693 -0600

What does this output say? It says that this file was last Accessed on the web or on the local system at 2015-03-22 00:05:54.445132850 -0500. That’s the date, time, and timezone of the server (-0500 is 5 hours behind GMT). It also says that the file was last Modified on 2014-10-16 13:03:52.000000000 -0500, and last Changed on 2015-02-04 21:10:28.178217693 -0600.

What’s the difference between Modify and Change? Aren’t they the same? Well, yes and no. In this particular context, Modify refers to the last time that the file contents were changed, and Change refers to the last time that the metadata of the file changed (permissions, location, and such). So, essentially, the last time that the file contents were modified was on October 16, 2014, long before it ever made it onto my system. That’s probably when the programmer last modified the file before some script kiddy decided to make it a part of their h4ck0r bot. The Change date is what we want to pay attention to, then, since it indicates when this file’s metadata (on my system) last changed.
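You can see this distinction for yourself with a throwaway file (the path here is arbitrary); a metadata-only change like chmod moves ctime but leaves mtime alone:

```shell
# Create a file and record its mtime (seconds since epoch).
touch /tmp/ctime_demo
mtime_before=$(stat -c %Y /tmp/ctime_demo)
sleep 1
chmod 600 /tmp/ctime_demo                    # metadata-only change
mtime_after=$(stat -c %Y /tmp/ctime_demo)
ctime_after=$(stat -c %Z /tmp/ctime_demo)    # ctime, seconds since epoch
# mtime is unchanged; ctime is newer.
[ "$mtime_before" -eq "$mtime_after" ] && [ "$ctime_after" -gt "$mtime_before" ] \
  && echo "chmod moved ctime but left mtime alone"
rm -f /tmp/ctime_demo
```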

I go through and, using the change date as a reference, look through my http access logs. I find the following:

– – [04/Feb/2015:21:10:19 -0600] “GET /wp-admin/ajax-upload.php HTTP/1.0” 404 2514 “” “Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0”
– – [04/Feb/2015:21:10:24 -0600] “POST /wp-admin/admin-ajax.php?action=settings_upload&page=pagelines&pageaction=import&imported=true HTTP/1.0” 200 27 “” “Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0”
– – [04/Feb/2015:21:10:26 -0600] “GET /wp-admin/ajax-upload.php HTTP/1.0” 200 114 “” “Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0”
– – [04/Feb/2015:21:10:27 -0600] “POST /wp-admin/ajax-upload.php HTTP/1.0” 200 2654 “” “Mozilla/4.0 (compatible; MSIE 8.0.1; Windows NT 6.1)”
– – [04/Feb/2015:21:10:28 -0600] “POST /wp-admin/ajax-upload.php HTTP/1.0” 200 3486 “” “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)”
– – [04/Feb/2015:21:10:29 -0600] “POST /wp-admin/ajax-upload.php HTTP/1.0” 200 5522 “” “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)”
– – [04/Feb/2015:21:10:33 -0600] “POST /wp-admin/ajax-upload.php HTTP/1.0” 200 16233 “” “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)”
– – [04/Feb/2015:21:10:46 -0600] “POST /wp-admin/ajax-upload.php HTTP/1.0” 200 23382 “” “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)”
– – [04/Feb/2015:21:10:48 -0600] “POST /wp-admin/ajax-upload.php HTTP/1.0” 200 3583 “” “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)”

Note how the time in these log entries lines up with the Change time in the ‘stat’ that I ran above? Since the logs line up with the timestamps, this is where the file came from. If I follow this set of logs back to the original spot, I can see exactly what happened.

– – [04/Feb/2015:21:10:19 -0600] “GET /wp-admin/ajax-upload.php HTTP/1.0” 404 2514 “” “Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0”

The malicious user checked to see if their shell existed. It didn’t; if you look, you can see the 404 “Not Found” error code in this log entry.

– – [04/Feb/2015:21:10:24 -0600] “POST /wp-admin/admin-ajax.php?action=settings_upload&page=pagelines&pageaction=import&imported=true HTTP/1.0” 200 27 “” “Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0”

Seeing that their shell didn’t exist, they used a newly published theme exploit against the theme that I use on my site. I actually found some links for this exploit that pretty much confirmed that this is exactly where my malware came from.

And now I know where the malware came from and how my site was exploited, so I can take action. I immediately updated my theme, and then looked through my entire site directory structure for any and all malware that might have been left behind. I used a ‘find’ command to do it.

find . -type f -ctime +1 -ctime -90

Since I know when the original compromise happened, and since I just updated my site, I can create a narrow time frame during which the malicious user uploaded files or modified them to contain their malicious code. Having found these files, I can delete them or clean them of bad code.
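If you prefer explicit dates to day counts, GNU find can bracket the window directly. The dates and the ‘*.php’ filter here are examples keyed to my timeline; adjust both to yours:

```shell
# Files whose metadata changed between the compromise and the cleanup.
# Run from the site's document root; '.' keeps the search scoped to the site.
find . -type f -name '*.php' -newerct '2015-02-04' ! -newerct '2015-03-23'
```

Each hit is a candidate for deletion or cleaning, same as with the -ctime version.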

And then finally, I took action to make sure that my wp-admin folder was restricted to unknown users. In the past, I’d had .htaccess restrictions in place that were supposed to prevent unknown users from accessing these pages. Had they been working, they would have prevented this sort of compromise from ever happening. Remember from the logs above that the malicious user used a file within wp-admin to perform their exploit.

At some point, though, these restrictions stopped working properly, and… well, here we are. Since my .htaccess restrictions aren’t working anymore, I enacted the restrictions at my front-end layer instead: nginx. I’m weird! I use two http servers to do my bidding: nginx as a front end and Apache as a back end. It’s sad that .htaccess doesn’t work 100%, but I can live with having the restriction in place at the nginx layer. Ultimately, however the site is secured, I’m happy.

So, that’s the tale of my rather sudden and random malware event of the day. Hopefully it’s somewhat interesting. Anyone is free to hit me up for questions on this point, and I’ll be glad to help out!

Linux Tech-tech for the Tech-tech Minded!

Hey! So, I’ve neglected this blog for long enough, and I think I’ve finally found something that I can talk about for now. Lots and lots and lots and lots of things have gone on in my life over the past few months that I’d consider blog-worthy, but I haven’t really been talking about it in a public forum largely to keep things private.

What I do wish to say is that I’m doing extraordinarily well in my new job. I feel like I’m surpassing even my own expectations with how well I’m doing, and to some degree I’m kinda afraid that I’m going to burn brightly and peak out very quickly, while everyone’s expectations of me keep climbing. But oh well, I’ve passed my own test and I feel like I’m acclimating to the new job very well.

The other thing that I want to mention on a personal note is that I’m intending to move out of my current apartment here soon. My new job affords me the ability to live alone and keep paying the bills. Aside from things that I will not discuss here, I’m mostly looking for a change of environment and a place to truly call my own. I’m actually really excited about it to a certain degree! Right now most of my things are stuffed into a teeny tiny bedroom at the current spot, and with a whole apartment to myself (even if it’s a small studio) I can spread myself out just that much more. I love changing things around and stuff, so it has me excited.

Ah, but now I will move on to other things. The past few days I’ve been writing on Facebook and Twitter about Linux technology things. I don’t really have a purpose in doing it; I just want to share knowledge in little snippets, and I know someone out there is enjoying the reads. So, below I’ll provide the first few posts, and I’ll probably update this blog with additional items moving forward. Enjoy!


OpenVPN is really neat. It’s probably one of the best documented technologies I’ve played with so far. They’ve got a very well fleshed out getting started page here:…/open-source/documentation/howto.html

What makes OpenVPN so neat is primarily this: you can set up a network of sorts that’s completely isolated from other networks. The server will be your router, and it’ll hand out virtual IPs using “tunnels” to all of the clients that you connect to it.

Where is this useful? Well, consider that you have an ISP with dynamic IPs. Your public IP is always changing, but you want to be able to host things within your home LAN as though you have a static IP. So, you buy a small server in a datacenter somewhere (such as with, install OpenVPN and create a server, and then connect a client on your home device to that OpenVPN server.

Even though your ISP IP is always changing, the tunnel IP that openvpn gives your device will never change, and ideally the public IP of that server you bought in the datacenter will never change either. Bam! You now have a static IP address for a “router” that will connect your home device to the web, and you’ve gotten around your dynamic IP address problem.
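For flavor, here is a minimal server-side config sketch for that arrangement. The port, subnet, and certificate paths are assumptions on my part; the real values come out of OpenVPN’s howto when you generate your keys:

```shell
# /etc/openvpn/server.conf (sketch)
port 1194
proto udp
dev tun                         # routed tunnels, as described above
server 10.8.0.0 255.255.255.0   # clients get stable tunnel IPs from this subnet
ca ca.crt                       # certificate/key paths are placeholders
cert server.crt
key server.key
dh dh.pem
```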


I have a love-hate relationship with FFMPEG. It’s probably the coolest and most frustrating thing I’ve dabbled in so far. FFMPEG is the technology I use to power my infamous “Lizard Cam.” You know… it’s that tech-tech that makes this thing work.

So what is FFMPEG? FFMPEG is basically a “transcoder.” It takes video or sound input, and then recodes it into another format/size, and it can do this all over the command line. For example, it can take raw input from a web cam and recode it into flash video formats. Often, the formats are referred to as “codecs.”

That’s where things get tricky, though. There are a gazillion different codecs for both video and audio. Some are only compatible with a select few. Like, AAC audio codecs won’t be compatible with flash codecs. Only mp3 will be. And then there’s a matter of whether you have compiled your FFMPEG program with support for the codecs that you’re intending to use. And then there’s the matter of figuring out the proper syntax for the command that you’re going to use to grab and transcode your audio/video input. And THEN there’s the matter of figuring out a working configuration in your FFserver (more about this in a minute) file so that you can stream it.
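To make that concrete, a minimal transcode looks like this. The file names are my own examples, and it assumes an FFMPEG build compiled with libx264 and MP3 support:

```shell
# Take input.avi, encode video as H.264 and audio as MP3,
# and wrap the result in a Flash Video container.
ffmpeg -i input.avi -c:v libx264 -c:a libmp3lame output.flv
```

Swap any codec for an incompatible one, or one your build lacks, and the command fails; that is the compatibility dance described above.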

It very very quickly gets complicated. The documentation isn’t much help either, because it focuses more on concepts than on a good “getting started,” or on definitions, commands, compatibilities, or working setups. Then again, the documentation looks different now than it did when I first started dabbling.

FFserver, on the other hand, is the portion of the technology that allows you to stream whatever it is that you’re capturing. I wouldn’t say that it’s any easier, but it does kinda put itself together once you figure FFMPEG out.

All in all, very very neat technology, but exceedingly frustrating and poorly documented (as of a year ago, anyway).


I’m now going to present a working FFMPEG command, along with explanations of all the little bits.

nohup ffmpeg -video_size 1920x1080 -framerate 20 -f x11grab -i :0.0 -f alsa -ac 2 -i pulse http://localhost:9537/screencast.ffm &

*nohup = This detaches the command from your current terminal session. It writes the output to a file called nohup.out, and if you close your terminal session the command will stay running.
*ffmpeg = This is the actual ffmpeg command. Everything following that is a modifier/option that changes its behavior.
*-video_size 1920x1080 = This sets the size of the frame that your video will be, in this case a full 1920x1080 pixels.
*-framerate 20 = This is fairly self explanatory. This tells FFMPEG how many frames to grab per second. The higher the framerate, the higher the quality, but also the higher the bitrate and the more internet bandwidth your little stream here will require. In this case, the framerate is 20, which is about standard.
*-f x11grab = The ‘-f’ stands for format. In this first case, we’re telling FFMPEG that our video format is going to be x11grab. In Linux, any desktop GUI environment is basically generated by a program called X11, so this tells FFMPEG to grab the video input from X11 directly. A screencast, if you will.
*-i :0.0 = In X11, the computer screens are referred to as displays. The -i tells FFMPEG to get input, and :0.0 refers to the first display. :0.1 would be my second computer screen. So basically, grab input from the first computer screen (in my case, my left screen).
*-f alsa -ac 2 -i pulse = Gonna do this as a whole, since they all go together. This time, the -f (format) tells FFMPEG about what audio we’re going to capture from the system. Alsa is actually a part of the Linux core kernel, and it interfaces with other programs to give a Linux computer sound. So we’re telling it to get stuff from the sound system directly. ‘-ac 2’ tells FFMPEG to create two audio channels. Meant to be played over two speakers, basically. Finally, the ‘-i pulse’ is telling FFMPEG to capture the sound from the Pulse Audio system. It’s similar to specifying the exact screen that we want our video from, like I described above.
*http://localhost:9537/screencast.ffm = FINALLY we have our output file. All that stuff that I described above is basically specifying inputs to FFMPEG and modifying that input’s parameters. This is where all that garbage goes after FFMPEG has processed it. The screencast.ffm file is a dynamic streaming file, which is designed to be streamed live over the internet for other viewers.
*& = This little ampersand basically runs your command in the background of your terminal session, so that you can run additional commands while this one runs. It sorta goes hand in hand with the nohup portion at the beginning.

An exhausting command, but a working one! As you can probably guess, this is designed to grab my actual desktop screen and stream it to the internet. If I were actually running this, you’d be able to see it here:

I originally came up with this because I couldn’t find a good Linux screencasting option among all the services out there (like Livestream or Ustream, etc). Ended up just making my own.


First, about “command lines” in general. When I think of a command line, I think of that blinky cursor thing that appears on a black screen, and you type weird voodoo into it to make things happen on a computer. I would imagine that most people probably associate the command line (or command prompt) with hackers, computer geniuses, geeks, nerds, and things like that. But mostly hackers.

The command line is actually not a very difficult concept to grasp if it’s properly explained. Let’s break it down…

1) A command line is a place where you input commands to a computer.
2) The commands that you input are defined by a “shell.” I like to think of a shell as a type of programming language. It has a syntax (way of writing commands), defined commands that you’re allowed to use, logic, and so forth.
3) The most popular command line shell for Linux is called BASH.

I think the next place to take this is to think about the commands themselves, and how they work in general.

Most commands are actually little tiny programs in and of themselves. They perform very simple tasks, and their behavior can be modified within the overall command that you feed into your command line. Let me give you an example:

“ls” — This command is a program designed to list files and folders within a certain directory (folder).
“ls -a” — Here, I’m using ls and I’m adding a modifier (or option) to the command, telling it to list all. The ‘a’ here stands for “all.” Often, there are hidden files and folders within a directory and this modifier will make them visible.
“ls -a /home/jesse/foldername” — Here, I’m specifying a specific folder path that I want to have listed.

You see how I’m doing this? I start with my command, I add modifiers to it, and it performs a task that I want it to perform. Easy to grasp conceptually, but much harder once you get into the more complicated commands, and you start stringing them together into complicated scripts (or ‘one-liners’ as some people call them).
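A quick way to convince yourself about the ‘-a’ modifier (the directory path here is a throwaway example):

```shell
# Hidden files (names starting with a dot) only appear with -a.
mkdir -p /tmp/ls_demo
touch /tmp/ls_demo/.hidden
ls /tmp/ls_demo        # prints nothing -- the hidden file is skipped
ls -a /tmp/ls_demo     # prints '.', '..', and '.hidden'
rm -rf /tmp/ls_demo
```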

High Memory Usage on Linux

So, for the past few weeks I’ve noticed distinctly high RAM usage on my F19 desktop. It wasn’t showing up in top or htop as cache or buffer, so I figured that one of my applications was using all my memory up. My kernel started swapping programs, which resulted in horribad performance.

At first I figured it was Chrome, so I tried using a hibernation plugin to hibernate my unused tabs. No worky. Tried shutting Chrome and Skype down to see if that would help, and still no worky. Something was just desperately filling up all that memory.

So, I downloaded a lovely little program that pie-graphs your memory usage by process to see where it was all going.

And lo! For some reason, my userspace programs were taking up less than a fifth of my memory. WTF??

Time to investigate more. As I did some more research, I discovered a Linux tool called slabtop, which displays the kernel’s cache memory usage. There, I found an entry called “dentry” that was taking up a full 5GB of my 10GB of RAM. This is basically a directory entry cache for the filesystem, so that the kernel can reference previously accessed files more quickly. Cue another helpful article that describes how Linux uses memory!

So, I did some kernel tuning to make it more prone to reclaiming dentry cache and to make the kernel less swappy on my machine. I also found a way to flush out the dentry cache, which I’ll probably do periodically anyway. Final article!
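Concretely, the knobs involved look like this. The values are the kind I’d reach for, not gospel; tune them to your own workload:

```shell
# Reclaim dentry/inode cache more aggressively (kernel default is 100).
sudo sysctl vm.vfs_cache_pressure=200
# Swap application memory out less eagerly (kernel default is 60).
sudo sysctl vm.swappiness=10
# One-off flush of the dentry and inode caches (2 = dentries and inodes).
sync
echo 2 | sudo tee /proc/sys/vm/drop_caches
```

To make the sysctl settings survive a reboot, they also need to go into /etc/sysctl.conf.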

So a Thing Happened to the Server

So, an interesting thing happened just this past Friday while I was messing around playing games on my computer. I spilled water on my server.

For those who do not know, I currently use a Dell PowerEdge 1850 rack-mounted server as my general purpose server for the things that I do. This server provides a platform for this website, my LizardCam, an IRC server, a Ventrilo server, and a Plex Media server that I use to enjoy my various movies and videos and things. I keep it on a desk right next to my computer. Since the actual machine is really flat, this works out well, and I can just put stuff on top of it as though it were a part of the desk.

Now, before you think that I’m putting water on my server, no, I’m not. I’m not quite that stupid. I’m still pretty stupid, though. I was keeping a big jug of water right neeext to it. Then I turned my chair, and the jug tipped over right onto the machine. It was pretty bad and very stupid.

Now, my first instinct at the time was to turn it off. Immediately. I did a hard stop on the box, opened it up, and checked for moisture and water. For the most part, it looked fairly localized around the left hard drive and on the front plane. I took a few pieces of paper towel to it to try and dry it out, and removed some pieces to check for any moisture here and there. I gave it a little while to dry off, and then tried to restart it. All went well for a few minutes while the machine went through its various start up checks, but when it tried to spool up the disk drive for the start up check, it turned itself off.

Now, the neat thing about these Dell machines is that they will turn themselves back on after a power outage or after any event that causes them to switch off. It tried to turn itself back on. Once, then twice, and then it stayed off. That was really weird. And any time that I tried to turn it back on after that, it switched itself off again. So, at this point, I was really scared that the machine was a lost cause. I tried everything. Tried hooking up a monitor to it so that I could see any errors and got nothing. Tried to switch which power supply I was using and also got nothing. It would power up the fans for a short bit, and then switch itself off again in very short order.

So yeah, I was pretty certain that the machine was toast at that point, and it felt awful. But I sought counsel from my various friends who are familiar with Dell products and hardware in general. The consensus was that “Yeah, it might be toast, but you need to let it air out and dry for a few days before you try starting it up again.” So, that’s what I did, despite how impatient I was to try and restart the damned thing. The whole incident happened on a Friday, and I let it air out until Monday morning, when I tried restarting again.

And here’s the really cool part. It started up and ran as though nothing had happened. I was fully expecting that it would not work, but it did! I was astonished! Very impressed with the hardware!

Oh yeah, and I’m going to try and be less stupid with this machine now. I’ve put it up on some stilts so that it sits above the desk instead of on it. That way, if there IIIS spillage, it’ll just go under the machine instead of on top of it.

So yeah, that happened.


I love communications. I love facilitating communication. That’s one of the reasons that I installed IRC on my server.

I didn’t stop at IRC, though! Now I have a fully functioning Ventrilo server as well! For those who don’t know what Ventrilo is, it’s a Voice over Internet Protocol service, much like Skype. Except it’s a very good group chat mechanism, whereas Skype is a little more cumbersome for group chats.

Ventrilo was surprisingly easy to set up. All I needed to do was download the server files, extract them, make a few minor configuration changes, and then run the server. It was trivially easy.

What was more difficult to do was to configure the server to run in a jailed environment. Similar to the IRC service, I’ve set up the Ventrilo server to run in a chroot jail so that if it were to ever be compromised or hacked-into it wouldn’t endanger the rest of the system. It took me a little while to figure out the proper chroot command to get this to work, but I was able to get this to work properly and I’m very happy with the results.
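The rough shape of a chroot setup like this is below. Every path is an assumption on my part; Ventrilo’s actual file layout and library needs will differ:

```shell
# Build a minimal root for the jail.
mkdir -p /srv/ventjail/bin /srv/ventjail/lib
cp /opt/ventrilo/ventrilo_srv /srv/ventjail/bin/
# ldd lists the shared libraries the binary needs; each one gets
# copied into the jail at the same relative path.
ldd /opt/ventrilo/ventrilo_srv
# Then launch the server with the jail as its root directory.
sudo chroot /srv/ventjail /bin/ventrilo_srv
```

The fiddly part is exactly the ldd step: miss a library and the jailed binary refuses to start.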

As a side note, the Ventrilo server was, by default, configured to run with an audio codec that isn’t very compatible with most client programs. It’s easy enough to reconfigure with the default configuration file, however, and there are plenty of options to choose from. I was able to resolve that issue just by reviewing the documentation and following the commands listed.

Anyway, Ventrilo is up and running on the box now! We’ll see if it causes any system instability as we move along.

RAM Upgrades, Promotions, and Vacations

I’m sorry for having not posted in a while. Developments haven’t exactly been fast paced for the past few days, but they’ve been there! This is going to be a fairly straightforward posting.

Yesterday, I went to the store to buy some new RAM for the server. Previously, it had two sticks of 512MB RAM in place, giving the server about 1GB of physical memory to play with. I’ve decided that isn’t enough for what the server is doing, and even though I haven’t even been taxing the machine with the applications that I’m currently running (the stream, this site, and the IRC server), I wanted to ensure that the system would always have more than it needed to run continuously. So, now instead of 1GB of RAM, the system has 2GB! I also made the swap partition smaller, shrinking it from 8GB (which the 32 bit OS can’t use anyway) to 3GB, which is less wasteful. So, more physical (fast) memory and less swap (slower, and wasteful if too large) memory. Works for me! And the system has been churning along quite well since then without any issues that I’ve seen, so I’ve been very happy.
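For the curious, shrinking the swap space went roughly like this. The device name here is a placeholder, and the actual partition resizing (with fdisk or parted) is omitted:

```
# Stop using the old 8GB swap partition (device name is hypothetical)
swapoff /dev/sda2

# ...resize the partition down to 3GB with fdisk or parted (not shown)...

# Write a fresh swap signature and re-enable it
mkswap /dev/sda2
swapon /dev/sda2

# Confirm the new memory and swap totals
free -m
```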

Now, aside from server developments, all of my work and all of the projects that I have initiated with this system have paid some dividends at work. I’ve been aiming to move over from my current position to something a lot more technical, and on Wednesday it was confirmed that I would be moving over to a Security role. 😀 This was very exciting for me! It’s going from a spot where I’ve been for a very long time to something completely new. More than that, it’s an acknowledgement of skill, a chance to learn more, and a huge professional opportunity.

Even though I’m excited, on some level I’m really apprehensive. I’m apprehensive because I’m going from a department where I know everyone and can work casually to one where I don’t know anyone at all. And the general atmosphere, while supportive, seems a bit grumpy. It’s a little unsettling, as is not knowing much of the policy and workflow around what I’ll be doing for the next few months. But that is the way of things, isn’t it? Being thrust into a new place and having to learn? I’ve learned several times from experience that no matter how out of place you might feel now, within a few months you’ll feel at home in your new environment. You have to be adaptable, keep a good attitude, apply yourself, and be willing to take a little friendly heat from your new coworkers to make it. 🙂

And aside from that, I have other things to look forward to. I’ve finally confirmed some long needed vacation time for this November! My best friend of all time (from the UK) is going to be coming to visit me, and we’re going to be having all sorts of adventures all around the state of Texas! So far, we have an idea of what we’re going to be doing, but it’s still largely unplanned and unscheduled (the events that is, not the time off). And the best part of all of that is that all of this time off is paid. I love working for my company. ^_^

Added Some New Features to the Site | Updated Stream Page!

Greetings all! Wow, what a day… when I haven’t been in class, I’ve been working on some aspect of the site or the server. Quite a lot of progress has been made.

If you haven’t noticed, then you should check out the stream page. I went about properly building an HTML environment for the stream object to sit in, and I’ve put in some good tips for everyone to read while they’re watching Chopstick being a lizard. The internet is a really cool place, you know? I was able to find some very clear-cut resources (with examples) that helped me slap that page together in very little time at all. Knowing that some of you are no doubt interested in what I found helpful, I’ve included them below.

I’ve added some new features to the site as well! You might have noticed that I’ve reformatted the home page so it doesn’t look like such a mess as before, and I’ve added a widget that shows my most recent blog posts for anyone who cares to review what goop I’ve typed up. I used a text widget to embed the LizardCam button in a sidebar as well, which looks much nicer than it did before.

I also added an About Me page, and I included a whole bunch of images that — I think — tell my story fairly well. At least, I think that they tell my story better than words ever could. I dunno about you, but when I write about my life it always comes across as some sort of encyclopedic entry, and that tires me out after a while. Pictures really convey a lot more than my words ever could.

Now for a little more geeky server stuff. Today in class I learned about something called NTP, which stands for Network Time Protocol. It’s basically a way to sync up the system clocks of all of the machines on a particular network. This was of particular interest to me, since I’m running my LizardCam with two separate machines on a single network (the desktop runs the camera while the server runs the stream). My problem previously was that these two machines had different system times, and since I’m running cron jobs to make everything work… yeah.

I didn’t really have a reliable way to sync up the system times until tonight, and I jumped at the chance to install NTP on both my desktop and on the server to ensure that they both have the exact same system time. Then I modified my cron jobs to take that into account. The benefit for all of you? Since I can now trust my timing on my cron jobs better than before, I’ve managed to cut down the stream downtime during cycles by a good minute or so. 😀 Now we’ll just have to wait and see if this works out tomorrow…
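If you want to do the same, getting NTP going is just a package install on each machine. This assumes a Debian-style system; your package manager may differ:

```
# Install and start the NTP daemon (Debian/Ubuntu style)
sudo apt-get install ntp

# List the time servers the daemon is polling
ntpq -p

# Spot-check that both machines' clocks now agree
date
```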

Time Again for Tech Speak!

Ok, so today I set about trying to figure out how to get a rotating slideshow of desktop images when loading my Linux desktop in XFCE. I had an absolute BITCH of a time trying to find any information on this one that would help me out in getting this set up, but I eventually found a solution that worked! Since there isn’t any information on this elsewhere, I figure I’ll put it here!

For those out there who use the XFCE Desktop for Linux, you may have wanted to set up custom images in your desktop background. You may or may not have noticed that there isn’t an option to rotate the images, even though you have the ability to set up a list of images to choose from. Technically, any time that you restart your computer, an image will be selected randomly from the list. However, there isn’t a way to tell the computer to, say, load up a different image or do a slideshow or something like that.


So, I started to do some research on this, and came across information that indicated that I could set up a cron job that would refresh the environment for me, which would be just like restarting the computer… without having to restart the computer. XD The command runs something like this.

/usr/bin/xfdesktop --reload

And it worked! …but only when I ran it from the command line. Running it from crontab wouldn’t work. Bummer. So, more research. It turns out that cron jobs run in a slightly different environment, and with slightly different access, than the regular ol’ command line. My command turns into this…

DISPLAY=:0.0 /usr/bin/xfdesktop --reload

And THIS works on the command line too. Basically, all this does is tell the command which X display to use when running, but it still wouldn’t work in a cron job. Uggh… I had such a hard time finding any information on what to do at this point.

I finally figured it out after referring to my notes from the Linux class I’m taking, which indicated that you can grant access to the X server using the `xhost` command. Alright, so I gave it a try: I added localhost, my normal user, and my root user to the list of approved clients, set my crontab, and waited…

xhost +localhost

xhost +SI:localuser:<username>

crontab -e

*/5 * * * * DISPLAY=:0.0 /usr/bin/xfdesktop --reload

And it worked! Huzzah! All of the huzzahs! Hope that this proves helpful for any Linux users out there who couldn’t find information on doing this anywhere else. C:

Officially Announcing LizardCam!

Alright, so after day one of trial runs with FFMPEG, the feed, the stream, and the crons that cycle it, I’m pleased to officially announce LizardCam! It’s no secret that I keep a bearded dragon (Chopstick) for a pet, and I’m extremely proud to show her off to the world! She’s kinda the mascot of this website even. You can find a link on my home page to the LizardCam, and I’ll put in a link at the end of this post as well.

So, now to set forth how the LizardCam will work. Since the webcam is technically broadcasting from my bedroom, I’m only going to run the cam while I’m awake. For now, the tentative schedule is 13:00 CDT through 19:00 CDT, every day of the week. Currently, I’ve configured the cam to cycle once every 20 minutes: on the 19th, 39th, and 59th minute of every hour the stream will shut down, and on the 1st, 21st, and 41st minute it will power back on. I’m doing this to keep the system fresh over the long stretch that I’ll be keeping it online, and also to test my skills in configuring cron jobs, scripts, and whatnot.
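For the cron-curious, a cycle like the one described above would look something like this in a crontab. The script names are placeholders for whatever actually starts and stops the stream, and the hour range is an approximation of the broadcast window:

```
# Stop the stream at :19, :39, :59 during broadcast hours
19,39,59 13-18 * * * /home/me/bin/stream-stop.sh

# Start it back up at :01, :21, :41
1,21,41 13-18 * * * /home/me/bin/stream-start.sh
```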

There is always a chance that the cam will go down. Earlier today I was running the cam, and it seemed to work well for a few hours before the computer simply stopped recognizing it. Since I work throughout the day, I wasn’t able to address this problem until I got home later in the evening. If the cam goes down, no worries! I’ll just get home and get to work on resolving the problem.

Also, the cam tends to lag behind real time after a few minutes. If you see that the stream has frozen up, or if you would like to keep apprised of Chopstick’s live movements, just refresh your browser. 🙂

Enjoy the cam everyone! I’m really proud of this one, and I hope that it works well!