How To: Seagate BlackArmor Data Recovery after DataVolume Failed Error

The Seagate BlackArmor 440 is a RAID-capable SMB NAS. Mine held four 1 TB Seagate drives in a RAID5 array. I strongly recommend AGAINST it. Check out the posts in the Seagate Community Forums for “Data Volume Usage Failed status”. This error message is what led me from irritation, to denial, to despair, to confusion and finally to triumph. The tl;dr version: one day, for no apparent reason, the RAID5 array with all your data on it will simply show up as FAILED in the BA440 even though none of the disks report a problem. Something in the BA440 RAID software has crapped out, and the whole point of RAID is now out the window.

Update: I found this quote on the BA’s Amazon review page: “Contacted support and they said ‘RAID 5 config stores all parity information on drive 1’.” So yes, if you have an issue with Drive 1, your array is useless.

As those of you who use RAID5 know, we are supposed to be able to withstand a complete failure of 1 disk and still have complete data integrity. This is why I bought the BA440. It was used as a data warehouse for old files, especially media files: 95G of media files, 7G of old (but still required) accounting data, 5G of required regulatory documents. All of it was inaccessible. It’s important to note here that not a SINGLE disk showed as failed. A failed disk would have been (theoretically) easy to fix. This was not that. In other words, this was a huge disaster. I completely agree with the review author’s comment: “I have worked in IT for 15 years and I have never come across something quite so stupid.”

If you search the Seagate forums you will find 5 basic solutions.

  1. Upgrade the firmware. If you’re lucky, your array will be working after a reboot.
  2. Restore from a backup. As several forum posts mention, a RAID5 NAS is -used- as a backup in many, many instances, probably the majority of them. And no, I did not have a backup of my backup.
  3. Send your drives to Seagate Data Recovery Services where they will be happy to recover the data for you for $3,000 to $20,000 (plus s&h). Uh, no thanks.
  4. Remove the drives, attach them to a windows system and use one of the several NAS recovery software tools. I tried the following.
      • NAS Data Recovery – Runtime Software
      • ReclaiMe RAID Recovery/File Recovery – ReclaiMe
      • UFS Explorer RAID recovery – SysDev Labs

    Of these, the only one I tried that seemed to work at all was UFS Explorer. After a 20-hour scan, I was able to view the file structure and see that it would likely be able to recover many files. The trial version only allows viewing/copying of small files; the full version (155 Euros) unlocks larger files. I was able to open some PDFs and small Word docs but can’t verify that it would have worked for all files. Had I just had the basic BA440 setup, I probably would have paid for the full version of UFS and been able to recover most if not all of my data. There was a problem though. The 95G of media files were on an iSCSI drive hosted on the BA440. Using iSCSI allowed me to have a mapped drive letter pointed directly at the BA440 rather than having to deal with a cumbersome web interface. It also sped up file transfer (which was atrocious even sped up…another reason to avoid the BA series). UFS had no idea what to do with that iSCSI data and couldn’t even see it.

  5. Get your linux-fu going and try to recover the data yourself.

This post is about how to mount your RAID5 array in Debian, create an iSCSI target so that Windows 7 can see it and get all of your data back like a superstar.

From here on, this is an AT YOUR OWN RISK kind of tutorial. Nothing that follows should damage your data, but I’m not a Linux genius and you might not be either, so you might want to image those disks before proceeding. Search for “clone RAID disk” for details on how to do that. FWIW, I didn’t. ALSO, if you’re following along and the output on your system isn’t what I say it should be, I’m not sure I can help. You can ask, but as noted, I’m not a Linux expert. I’ll also note that what follows is a compilation of bits and pieces from 2 or 3 dozen various Linux-related websites. One of the frustrating things I found is that most of the sites had steps but didn’t really explain them (at least not so a Windows guy could understand them). I hope that this post avoids that.

Whether you are using method 4 or 5, the first hurdle is to get your disks attached to a machine so you can access them outside the BA enclosure. Remove each disk. Most tutorials will say to label it, but I’m not sure it matters, as 1) I won’t be using the BA again and 2) I don’t think it’s possible to repair the array without a reformat, so the disk order is irrelevant. But labeling takes a second, so you might as well. I was fortunate to have 4 unused eSATA USB docks in the server room. If you don’t have any kicking around, they’re between $50-70 for a dual dock (so $100-ish total for 4 disks). You could also use eSATA cables to your motherboard if you have enough spare ports, or buy an eSATA PCI expansion card (also about $100 for 4 ports).
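Once the disks are in the docks, it’s worth a quick sanity check that the kernel actually sees all four of them before going any further. A minimal check (the sdb..sde names are examples; yours may differ):

```shell
# List every block device partition the kernel knows about. With four
# 1 TB disks attached you should see sdb..sde alongside the system disk.
cat /proc/partitions
```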

Now here, I should note I’m a Windows guy and have only been playing with Linux for about 8 months. The command line doesn’t scare me though, as I’m old enough to be a DOS guy too. I did happen to have a 10-year-old Dell system that I had just installed Debian 7 (wheezy) on to use as a RADIUS authentication server. You can try using a live CD of various *nix distros, but I had to install some packages and I’m not sure how well that would work. The other drawback of a Live CD is that if you need to reboot or you lose power, you need to re-do everything as nothing is saved. Everybody has a crappy old system stuffed in a closet; get it out & breathe new life into it.

At this point we’ll assume you have Debian 7 running and your 4 drives hooked up via USB. To keep this focused I’m going to assume you have at least -some- Linux experience. You don’t need a lot, but familiarity with the terminal console and basic how-to-get-things-done is required.

Open a terminal as root. This prevents you from having to sudo every command.

We need to install mdadm (Multiple Device ADMinistration, the Linux software RAID manager)

root@radiusserver: apt-get install mdadm

and the iSCSi target package and associated files

root@radiusserver: apt-get install iscsitarget
root@radiusserver: apt-get install iscsitarget-dkms

Note that the dkms package can take quite a while to install.

Some of the steps require us to make some configuration file edits. You can use vi or nano, the *nix geek methods, but I prefer a GUI so I use gedit. If you don’t have it you can

root@radiusserver: apt-get install gedit

Once everything is installed we need to make sure the system can see the RAID drives.

root@radiusserver: cat /proc/mdstat

The output is as follows

Personalities : [raid1]
md3 : inactive sdd4[1](S) sde4[3](S) sdb4[0](S)
2921867757 blocks super 1.2

md2 : active (auto-read-only) raid1 sdb3[0] sde3[3] sdc3[2] sdd3[1]
521408 blocks [4/4] [UUUU]

md1 : active (auto-read-only) raid1 sdb2[0] sde2[3] sdc2[2] sdd2[1]
1044800 blocks [4/4] [UUUU]

md0 : active (auto-read-only) raid1 sdb1[0] sde1[3] sdc1[2] sdd1[1]
1043840 blocks [4/4] [UUUU]

unused devices: <none>

You’ll note that md3 is shown as inactive. Its members (sdb4, sdd4, sde4) are the data partitions of my RAID drives.
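If your own mdstat output is longer, a small awk filter can pick out just the inactive arrays. This is a sketch run against sample text so you can see what it does; on the live system you’d point the same filter at /proc/mdstat (awk '/^md/ && /inactive/ {print $1}' /proc/mdstat):

```shell
# Print the name of any md array whose status line says "inactive".
printf 'md3 : inactive sdd4[1](S) sde4[3](S) sdb4[0](S)\nmd2 : active raid1 sdb3[0] sde3[3]\n' \
  | awk '/^md/ && /inactive/ {print $1}'
# → md3
```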

Next we’ll look to see what mdadm can find out about these disks:

root@radiusserver: mdadm --examine /dev/sd[adbe]4

The bracket is a shell glob matching one character from the set, so this examines /dev/sda4, /dev/sdb4, /dev/sdd4 and /dev/sde4 (whichever of them exist).
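The sd[adbe]4 pattern is ordinary shell globbing, nothing mdadm-specific. A throwaway demo with plain files standing in for the device nodes shows how the bracket expands:

```shell
# Create dummy files named like the partitions, then glob them.
mkdir -p /tmp/globdemo
touch /tmp/globdemo/sda4 /tmp/globdemo/sdb4 /tmp/globdemo/sdd4 /tmp/globdemo/sde4
ls /tmp/globdemo/sd[adbe]4   # matches all four names
```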

It finds the following arrays


Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 2dba9164:94ded8f1:1efe9130:56bae5be
Name : 3
Creation Time : Sat Sep 17 07:22:52 2011
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 1947911838 (928.84 GiB 997.33 GB)
Array Size : 2921867712 (2786.51 GiB 2991.99 GB)
Used Dev Size : 1947911808 (928.84 GiB 997.33 GB)
Data Offset : 272 sectors
Super Offset : 8 sectors
State : clean
Device UUID : b0d40a81:14eada47:e125cd7a:2c7bb3e5
Update Time : Wed Mar  5 10:58:46 2014
Checksum : a17ddcf9 - correct
Events : 84026
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 0
Array State : AA.A ('A' == active, '.' == missing)


Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 2dba9164:94ded8f1:1efe9130:56bae5be
Name : 3
Creation Time : Sat Sep 17 07:22:52 2011
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 1947911838 (928.84 GiB 997.33 GB)
Array Size : 2921867712 (2786.51 GiB 2991.99 GB)
Used Dev Size : 1947911808 (928.84 GiB 997.33 GB)
Data Offset : 272 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 55774217:da69391e:7845a599:a732a26c
Update Time : Wed Mar  5 10:58:46 2014
Checksum : b3dad676 - correct
Events : 84026
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : AA.A ('A' == active, '.' == missing)


Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 2dba9164:94ded8f1:1efe9130:56bae5be
Name : 3
Creation Time : Sat Sep 17 07:22:52 2011
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 1947911838 (928.84 GiB 997.33 GB)
Array Size : 2921867712 (2786.51 GiB 2991.99 GB)
Used Dev Size : 1947911808 (928.84 GiB 997.33 GB)
Data Offset : 272 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 36b4b98c:90852ea4:17a9a4af:b24e9c94
Update Time : Wed Mar  5 10:58:46 2014
Checksum : ed40aeba - correct
Events : 84026
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 3
Array State : AA.A ('A' == active, '.' == missing)

There are a couple of things to note here.

  1. Array State : AA.A (‘A’ == active, ‘.’ == missing) – mdadm sees that disk 3 of the array is missing, which as noted above, it is.
  2. Array UUID : 2dba9164:94ded8f1:1efe9130:56bae5be
  3. There are 3 devices found, all 1 TB drives
    • /dev/sdb4
    • /dev/sdd4
    • /dev/sde4

    That’s because I followed Seagate’s data recovery advice and tried to reclaim a disk which made it unusable to my RAID array. Fortunately, unlike the BA440, Linux knows how to rebuild a RAID5 array with one of the 4 disks missing.

  4. State : clean – this means there’s nothing structurally wrong with the array, and there’s a good chance we’ll be able to get all the data back.

The rest of the output we can ignore.
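Since only a handful of fields matter, an egrep filter makes comparing the four superblock dumps easier on the eyes. Sketched here against a captured fragment; on the live system you’d pipe the real command through the same filter (mdadm --examine /dev/sd[adbe]4 | egrep 'UUID|State|Events'):

```shell
# Keep only the UUID, state and event-count lines from an --examine dump.
printf 'Magic : a92b4efc\nArray UUID : 2dba9164\nState : clean\nEvents : 84026\nChunk Size : 64K\n' \
  | egrep 'UUID|State|Events'
```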

Now we have a look at the status of the RAID array

root@radiusserver: cat /proc/mdstat

Personalities : [raid1]
md3 : inactive sdd4[1](S) sde4[3](S) sdb4[0](S)
2921867757 blocks super 1.2

md1 : active (auto-read-only) raid1 sdb2[0] sde2[3] sdc2[2] sdd2[1]
1044800 blocks [4/4] [UUUU]

md3 is our inactive RAID array. You’ll note the devices from the mdstat and mdadm --examine commands are the same.

At this point we need to install another package, LVM2 (Logical Volume Manager). The BA440 puts an LVM volume on top of the RAID array, and this package lets us read it.

root@radiusserver: apt-get install lvm2

The next thing we need to do is assemble the disks back into an array we can mount and use. We use mdadm and specify the device (from mdstat) and UUID (from mdadm --examine) of the array.

root@radiusserver: mdadm --assemble --force /dev/md3 --uuid=2dba9164:94ded8f1:1efe9130:56bae5be

(note: there may be no output here. There should be some; see below)

The next 2 commands activate the Volume Group

root@radiusserver: modprobe dm-mod
root@radiusserver: vgchange -ay
No volume groups found

Oops. You may find this error appears after the vgchange command. It happens because the kernel already grabbed the disks and auto-assembled the device (as inactive) when the USB drives were detected. We need to stop the device so we can reassemble it ourselves. Since /dev/md3 is the device we want to use:

root@radiusserver: mdadm -S /dev/md3
mdadm: stopped /dev/md3

Now if we rerun the –assemble command

root@radiusserver: mdadm --assemble --force /dev/md3 --uuid=2dba9164:94ded8f1:1efe9130:56bae5be

We see the following output:

mdadm: /dev/md3 has been started with 3 drives (out of 4).

Of course since this is -real- RAID5, a missing disk is no big deal.

Now rerun

root@radiusserver: modprobe dm-mod
root@radiusserver: vgchange -ay

We see

1 logical volume(s) in volume group 'vg0' now active

Now we run lvscan to make sure we can see the new LV status

root@radiusserver: lvscan
ACTIVE            '/dev/vg0/lv0' [2.72 TiB] inherit

Now we need to create a mount point (basically an empty directory the volume gets attached to)

root@radiusserver: mkdir /media/DATAVOLUME

Now we actually mount the volume so we can use it.

root@radiusserver: mount /dev/vg0/lv0 /media/DATAVOLUME

Note: If you get an error when trying to mount the volume (/dev/vg*/lv* does not exist), you may be running into a block size issue. This is a known issue when running a Live OS version of Debian 7.6 from a USB stick. The ext4 file system on the BA has a block size of 65536 and the Live OS can only read a block size of 4096. The solution is to run

apt-get install fuseext2

Note #2: Please see the comment from Vlad about using fuseext2 in this situation. You may encounter issues when reading large files, such as the iSCSI target LUN. Again, this only seems to apply if you are using the Live OS. I did not encounter any issues from the full install.

Then run

fuseext2 -o ro -o sync_read /dev/mapper/vg*-lv* /mnt

(where * is the appropriate volume id)

This seems to be a limitation of the Live OS and not a standard install but since many will try this from a Live OS (especially Windows users), it’s important to know this potential issue.

Thanks to Keaton Forest in the comments for this workaround.
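You can check an ext filesystem’s block size yourself with tune2fs (part of e2fsprogs). The sketch below builds a throwaway image so it’s safe to run anywhere; on the real system you’d point tune2fs at /dev/vg0/lv0 instead:

```shell
# Make an 8 MB file, format it ext2 with a 4096-byte block size,
# then read the block size back from the superblock.
dd if=/dev/zero of=/tmp/blockdemo.img bs=1024 count=8192 2>/dev/null
mke2fs -q -F -b 4096 /tmp/blockdemo.img
tune2fs -l /tmp/blockdemo.img | awk '/Block size/ {print $3}'
# → 4096
```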
Now list the contents of the mounted directory

root@radiusserver: ls /media/DATAVOLUME

And you should see a list of files on the DATAVOLUME of the RAID array.

aquota.user  iscsi_device  jewab     Public        webserverBackup
Download     IT_Archives   PDFArchives     __raclient  wiki

Now if you start up the Nautilus file browser

root@radiusserver: gksudo nautilus

and you should be able to browse the “normal” non-iSCSI files on the just-mounted volume. Copy them to a network drive or a local drive other than the RAID array. If you didn’t use iSCSI, your data recovery is complete. Time for a beer.

If you did have iSCSI set up on the BA440, you’ve got more work to do.

First, if you try to access the directory iscsi_device, you will receive a permission denied error. This directory contains the actual iSCSI target (or targets if you have more than one) so we need to take ownership of the directory by running:

root@radiusserver:/etc/iscsi# chown -R <your username> /media/DATAVOLUME/iscsi_device

Now list the contents of that directory

root@radiusserver: ls /media/DATAVOLUME/iscsi_device

In my case I have 2 iSCSI targets

iSCSI-1 SQLBackup

I’m only going to recover the data from iSCSI-1, so that’s what I’ll focus on from now on. If you need to recover multiple devices, you would just repeat these steps.

Now run

root@radiusserver: sudo gedit /etc/default/iscsitarget

This is likely empty. Add the following line so the system is allowed to start the iSCSI target (this is the stock option name in Debian’s /etc/default/iscsitarget):

ISCSITARGET_ENABLE=true


Next edit the iSCSI target configuration file

root@radiusserver: sudo gedit /etc/iet/ietd.conf

Don’t modify anything except add the following to the bottom of the file

Target iqn.2014-03.local.mynet:storage.Lun0
       Lun 0 path=/media/DATAVOLUME/iscsi_device/iSCSI-1,TYPE=fileio

This defines the Target name, which is the iqn line. This is what will show up in the Discovered Targets window of the iSCSI Initiator in Windows. The second line defines the LUN (Logical Unit Number), giving it a number (0), the full path to the target file (remember that the path is case-sensitive! That messed me up for a while) and specifying the type (fileio). fileio is used for file-backed storage; there are other types, but we don’t need them.

Note: If you want to set up a second or further Target, you would change the target iqn, the Lun number and the path

Target iqn.2014-03.local.mynet:storage.Lun1
       Lun 1 path=/media/DATAVOLUME/iscsi_device/SQLBackup,TYPE=fileio

Now we need to allow connections to the target

root@radiusserver: sudo gedit /etc/iet/initiators.allow

and add the following to the bottom of the file (it may be empty):

iqn.2014-03.local.mynet:storage.Lun0

Now all we need to do is restart the iSCSI service:

root@radiusserver:/etc/iscsi# sudo service iscsitarget restart
[ ok ] Removing iSCSI enterprise target devices: :.
[ ok ] Stopping iSCSI enterprise target service: :.
[ ok ] Removing iSCSI enterprise target modules: :.
[ ok ] Starting iSCSI enterprise target service:.
. ok

Now we can check to make sure the target is running and available

root@radiusserver:/etc/iscsi# iscsiadm -m discovery -t st -p <server-ip>

(replace <server-ip> with the Debian machine’s IP address; iscsiadm comes from the open-iscsi package if you don’t already have it)

Then we see our target: <server-ip>:3260,1 iqn.2014-03.local.mynet:storage.Lun0

Note here. If your Debian system has a firewall set up (iptables) you will need to allow access for port 3260. If you’ve set this up for recovery, the default Debian install does NOT use a firewall so you shouldn’t need to do anything.

Now all we need to do is head back to our Windows machine and fire up the iSCSI Initiator as admin. Enter the IP of the Debian machine and hit Quick Connect. Your target should show up in the Discovered targets section. Click the Volumes and Devices tab and make sure there is something listed in the Volume/Mount point/device section. If not, click Auto Configure. Close the iSCSI tool and head to Explorer. You should see a new drive letter which points to the iSCSI target, and you should now be able to move your files to a network or local drive.

Hey, you’re done! That calls for a celebration.

What next?

Once your data is moved off the RAID drives (and verified good!), you could reinstall them in the BA device, reformat and copy the data back, in the hope that this won’t happen again. Not for me, sorry. Fool me once…yada yada… (of course, we now know how to fix it if it happens again…)

You could void your warranty and install a version of linux to replace the OS. Search for “install linux blackarmor” for options. Note the possibility of bricking the unit is reasonably high. I might try this as I certainly won’t be going back to using the BA’s native software. It’s why I’m writing this.

Lastly, you could tuck the debian system you just built with the 4 external drives in a closet or server room and use that as your RAID5 file storage. It’s probably what I’m going to do. NOTE: I’ve added a second post outlining how to do this.

I hope this helps some of you to recover your data and saves you some grey hair!


When ADODB.Recordset variable not defined isn’t due to a missing reference.

Had a really obscure issue this morning that will probably never happen to anyone else but if it does here you go….

I was trying to compile an older Access 2003 ADP that we hadn’t compiled for quite a while and it was failing. I opened up the code editor and ran Debug > Compile,

which resulted in Compile Error: Variable Not Defined on the Set line that calls CreateObject for the ADODB.Recordset (the screenshot of the line didn’t survive).

Now normally this is a result of a missing reference. Code Editor > Tools > References

The missing reference for an error on ADODB would be Microsoft ActiveX Data Objects 2.x Library but my application has that reference. Hmmm.

I spent some time adding & removing and changing versions and still nothing. I’m sitting staring at the screen, coffee in hand, when I noticed something.

Can you see it?

(The screenshot is gone, but the giveaway was the quote characters.)

Those quotes are wrong. The VBA code editor is plain text, but the quotes in the Set statement are extended-character “smart” quotes. Not sure how they got there. My guess is that when we had to switch CreateObject due to Windows 7 SP1, the code was emailed in Outlook, copied from a WordPress blog (which likes these types of quotes) or passed around the office in a Word document.

A simple search & replace of the offending quotes with standard quotes resolved the issue.





The Mysterious Spy.log – Coldfusion & JDBCSpy

A couple of weeks ago, I found a file on my internal web server called spy.log, which gave me a bit of a scare (but then I realized nobody spying on me is likely to call a file spy.log). This file was almost 20G in size. What’s up with that? After finding a viewer that could open a 20G text file, I determined that this was a legitimate file belonging to JDBCSpy, a tracing extension of the ColdFusion JDBC driver that can optionally be enabled. To enable it, you just add a reference to it in the Advanced section of the ColdFusion datasource connection string.


Thing is, I don’t remember enabling it. I found some more information on Charlie Arehart’s blog, which reminded me that at one point I had installed a demo version of FusionReactor to diagnose some performance issues. I’m not sure if the version of FR I installed modified the connection string and did not remove it when I uninstalled the program, or if I added it (note to self: make better changelog notes please).

In any case, the fix was simple: delete the connection-string attribute and restart the CF services.




A Simple Way to Clean Up Your InBox

I don’t know about you, but I manage 3 different email accounts at work. These accounts have been around for a very long time (10+ years) and over this time they’ve shown up on many, many email marketing lists. One account was getting 30-50 emails per day from marketers, auto-added newsletters and other sundry sources. Every day I would dutifully open up my email client, then play the delete game. Then one morning last month I had a revelation. I don’t read any of this stuff. Most of these emails are coming from legitimate sources. Many use SafeUnsubscribe or similar services. So why not just unsubscribe?


Wow. What a concept. Why did it take me so long to figure this out? A couple of reasons, I guess: habit & creep. Back in the bad old days, clicking on an unsubscribe link was often simply a way for spammers to verify an address, so I got in the habit of simply trashing everything. (And I don’t make a habit of clicking on links in emails – we all know that’s a really bad idea, right???) Since most of the emails I was getting were from obviously legitimate sources using services like Constant Contact (who provide the SafeUnsubscribe system), I could be assured that with a careful click (hovering over the link, verifying that it was pointing to a site I expected, etc.) I could rid myself of these emails. Aside from habit, creep is the other reason. A newsletter here, a weekly sales blast there, and suddenly you’ve got 50 marketing emails a day.

So there you go. Make a resolution for 2012 to clean up your inbox. It’s dead easy. And my InBox – down to 4 marketing emails today (unSUBBED!) and 6 from friendly suppliers in China – which I guess I’m stuck with.

Project Honeypot & Coldfusion Part 2

One of my more popular posts has been Stopping Comment Spammers & Email Harvesters with Coldfusion & Project Honeypot. This code has been working very well for me and I have seen a noticeable decrease in comment spam. It also seems to be working for Project Honeypot, at least in a small way.

My Stats

  • Harvester visits to your site(s): 42
  • Recent visits (this week): 3
  • Recent visits (this month): 9
  • Spam traps issued on your sites: 304
  • Spam received at your addresses: 1,089
  • Received this week: 112
  • Received this month: 417
  • Comment spam posts to your site(s): 0

A code update.

One of the things I noticed since implementing the code in my previous post is that my site page load times were up quite a bit. The reason is that the code uses http:BL to do a DNS lookup against the Project Honeypot servers on every page load. This takes -time-. I decided to add my own whitelist table and some code to eliminate these repeated lookups.

The table is simple, just

visitor_ip_addys
    ipaddy [varchar(15)]
    visitdate [datetime]

I added the following function to my Honeypot CFC

<cffunction name="newVisitorCheck" returntype="string">
   <cfargument name="ip" required="yes" type="string">
   <cfset var vQry = "">
   <cfset var result = "">

   <cfquery name="vQry" datasource="myDSN">
     select ipaddy from visitor_ip_addys where ipaddy = <cfqueryparam cfsqltype="cf_sql_varchar" value="#arguments.ip#">
   </cfquery>

   <cfif vQry.recordcount eq 0><!--- then it's a new visitor --->
     <cfset result = "new">
   <cfelse>
     <cfset result = "existing">
   </cfif>
   <cfreturn result>
</cffunction>

And changed my honeyPotCheck function to

<cffunction name="honeypotcheck" returntype="struct" hint="Check Project HoneyPot http:BL">
  <cfargument name="ip" required="yes" type="string">
  <cfset var aVal = "">
  <cfset var hpkey = "MyKey">
  <cfset var stRet = structNew()>
  <cfset var result = "">
  <cfset var iQry = "">

<!---jb: added check to see if this ip has visited in the last 3 months. We have a table to track ips which is retained for 3 months. IPs that check as clean
against http:BL are added to this table to increase page load performance. The table is cleared every 3 months to revalidate visitors (in case they may have been
compromised in that time) and to keep table size reasonable --->

  <cfinvoke method="newVisitorCheck" returnvariable="result">
    <cfinvokeargument name="ip" value="#arguments.ip#">
  </cfinvoke>

  <cfif result eq "new">
    <!--- Get the different IP values --->
    <cfset aVal = listToArray(gethostaddress("#hpkey#.#reverseip(arguments.ip)#"), ".")>

    <cfif aVal[1] eq "IP-Address not known"><!--- jb: added evaluation of array for good addresses --->
      <!--- set a value indicating ok address --->
      <cfset stRet = {type=99}>
      <!--- insert into visitor_ip_addys table as this is a clean IP --->
      <cfquery name="iQry" datasource="MyDSN">
        insert into visitor_ip_addys (ipaddy, visitdate) values
        (<cfqueryparam cfsqltype="cf_sql_varchar" value="#arguments.ip#">,
         <cfqueryparam cfsqltype="cf_sql_timestamp" value="#now()#">)
      </cfquery>
    <cfelse>
      <!--- there was a match so set the return values --->
      <cfset stRet.days = aVal[2]>
      <cfset stRet.threat = aVal[3]>
      <cfset stRet.type = aVal[4]>

      <!--- Get the HP info message ie: threat level --->
      <cfswitch expression="#aVal[4]#">
        <cfcase value="0"><cfset stRet.message = "Search Engine (0)"></cfcase>
        <cfcase value="1"><cfset stRet.message = "Suspicious (1)"></cfcase>
        <cfcase value="2"><cfset stRet.message = "Harvester (2)"></cfcase>
        <cfcase value="3"><cfset stRet.message = "Suspicious & Harvester (1+2)"></cfcase>
        <cfcase value="4"><cfset stRet.message = "Comment Spammer (4)"></cfcase>
        <cfcase value="5"><cfset stRet.message = "Suspicious & Comment Spammer (1+4)"></cfcase>
        <cfcase value="6"><cfset stRet.message = "Harvester & Comment Spammer (2+4)"></cfcase>
        <cfcase value="7"><cfset stRet.message = "Suspicious & Harvester & Comment Spammer (1+2+4)"></cfcase>
        <!--- <cfdefaultcase> jb: moved to top of function as we can't eval the array if there is no lookup response ie: no match in http:BL
          <cfset stRet.message = "IP-Address not known">
        </cfdefaultcase> --->
      </cfswitch>
    </cfif>
  <cfelse>
    <!--- good address (already in the whitelist) --->
    <cfset stRet = {type=99}>
  </cfif>
  <cfreturn stRet>
</cffunction>

As you can see from the comments in the code, I do the look-up (newVisitorCheck) when honeypotcheck is invoked, which is on each page load. The check does a query to see if that IP is in our white list table. If it is, then we skip the rest of the check and do not do a http:Bl DNS query. If it does not exist in our white list, that either means that the IP is new so we need to check it, or that it is a known bad IP. This means that new visitors have a slightly longer wait on first page load as we are doing the look-up, but then if they pass the look-up, we add them to the white list* and do not slow them down for subsequent page loads. As noted in the comments, we keep entries in the white list for 3 months (an arbitrary number).

After 3 months, we remove the IP from the white list so we can recheck it to make sure the IP hasn’t been compromised.

The code to do this is:

<cffunction name="ipTableCleanup" access="remote">
  <cfquery name="deleteIP" datasource="myDSN">
    delete from visitor_ip_addys where visitdate <= DATE_ADD(CURRENT_TIMESTAMP, INTERVAL -90 day)
  </cfquery>
</cffunction>

This is run every day via a scheduled task set up in CFAdmin.
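If you’d rather not rely on CFAdmin, the same cleanup could be kicked off by cron on the server instead. This is a hypothetical sketch (the cron file, host and CFC path are placeholders, not from my setup) that calls the remote method once a day:

```
# /etc/cron.d/honeypot-cleanup (hypothetical) - run the whitelist cleanup at 3 am
0 3 * * * www-data wget -q -O /dev/null "http://localhost/honeypot.cfc?method=ipTableCleanup"
```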

All in all, this seems to be working quite well as page load times are back to where they were before the Honeypot implementation and the Honeypot is still doing its job.

*Note that since you are capturing & storing IP addresses, your privacy policy should reflect this fact.

QR Code Generator Update

I’ve just found a bug in the way I wrote the original code for my QR Code Generator.

Update 2: Thanks to a comment from Michael, I’ve found another way to fix the issue by adding encodeURIComponent to the JS function. See the comments for details. Now you have 2 ways to do it 🙂

The problem is that when the input URL itself contains a query string, the url scope splits it apart, so the variable #url.siteurl# only receives part of the value you wanted to pass to the Chart API.

If you enter a URL that itself contains a query string (say one ending in &bar=2) as the input value of the text box, the value of url.siteurl gets truncated at the & and CF creates -another- URL variable called bar with a value of 2.

This had me stumped for a while, but what is happening is that the entire input ends up as part of the page’s own URL query string.
What we get when we do a <cfdump var=”#url#”> is

param1 (siteurl) = [the input URL, cut off at the first &]

param2 (bar) = 2

See what happened there? Everything after the first ? (dsp_qrcodeGen.cfm?) up to the first & (&bar) becomes a param pair, and then everything after the & becomes another pair. Problem is, we don’t -want- to pass 2 URL params, as we’d have to handle them by doing something like

<cfhttp url="" result="qrcode" getasbinary="yes">

This gives us the proper string to pass, but it requires us to hand-code the cfhttp call, making it very inflexible. It becomes even more difficult if we need to pass additional params. You could probably code up a parsing loop of some kind, but there is a much simpler method.
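You can reproduce the split itself in any shell, using a made-up URL in place of the real one (the original example URL was eaten by the blog software):

```shell
# Everything after the page's first "?" is the query string; CF then
# splits it on "&" into name=value pairs, exactly like this:
url='dsp_qrcodeGen.cfm?siteurl=http://somesite.com/page.cfm?foo=1&bar=2'   # made-up input
echo "${url#*\?}" | tr '&' '\n'
# → siteurl=http://somesite.com/page.cfm?foo=1
# → bar=2
```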

CF does not parse form variables the same way it does URL params, so by using the form scope, CF doesn’t break apart the string we feed it.

<div style="margin:auto; width: 700px; height:450px;padding:25px;text-align:center;border:1px solid;">
    <form method="post" action="dsp_qrcodegen.cfm">
        <h3>QR Code Generator</h3>
        Input URL
        <input type="text" name="siteurl" id="siteurl" style="width:500px;margin:50px 0 50px 0;"><br>
        <input type="submit">
    </form>
    <cfif structkeyexists(form, "siteurl")>
        <div style="margin:auto;">
        <cfhttp url="|0" result="qrcode" getasbinary="yes">
        <cfimage action="writeToBrowser"
                 source="#qrcode.filecontent#" />
        </div>
    </cfif>
</div>

#form.siteurl# keeps its full value, so all our params get passed, and you can add as many additional params as you like.

Windows 7 regional settings & Microsoft Access Errors

We’ve recently been swapping out our old XP machines for new Win7 machines and for the most part things have been pretty smooth (except you, HP 1020 printer – yes, I’m looking at you). However, we did start to run into some unexplained weirdness. We run an in-house order system built on an Access ADE/Access 2003 Runtime front end & an MSSQL backend. With the latest couple of new machines we started to see some errors, specifically when a user tried to use the MSCAL.OCX DatePicker. (It turns out we hadn’t run into the error before because the earlier machines went to non-orderdesk people who didn’t use mscal.) Now, because users have Runtime & not a full version of Access, debugging these kinds of errors can be a challenge. I have lots of validation & error handling built in for user input issues, but Runtime does not provide meaningful error messages on its own, so when you run into a system-related error, you just get a generic error message (yes, there probably are ways to handle those kinds of errors too, but not in -my- apps).

Things became even stranger as we found that User A had the error but when User B logged on to the same machine, they were able to use the app just fine. My initial thought was that it was a permissions problem for User A. We checked folder permissions for our app folder and everything seemed fine. We also checked to make sure both users had perms to the Access Program folder (where the OCX resides) and that checked out OK as well. I was tied up with some things so I had my assistant investigate a bit more. After some muttering & swearing (I may be projecting here), he returned to my office and said triumphantly, “Regional settings!”

For some reason, User A had regional settings that were different from User B’s (who had the correct setup). The Calendar control didn’t know what to do with the date format it was being given, so it threw an error.

Firstly, it’s awesome that “my guy” figured this one out. In a very long series of assistants, he is the first one even remotely capable of the kind of thinking that finds these kinds of solutions. If you’ve ever uttered the words, “never mind, I’ll fix it myself”, you know what I’m talking about.

Secondly, I ran into this exact error when we made the switch from Win2000 to XP Pro years ago, but I didn’t make note of it. Now I have, and I’ll be able to find the solution when we make the switch from Win7 to Win10 in 8 years’ time. :0