XBMC on Dell Inspiron 400 (Zino HD)

Of late, I have been trying to wean myself off Windows systems: first my laptop, then my desktop, and now my HTPC unit. Linux systems just feel snappier, and they have come a really long way in terms of “just working” on common hardware.
Yesterday I did a full-disk install of XBMCbuntu (“Gotham”), wiping out the old Windows 7 OS on the unit. Once again, the results were extremely impressive, reinforcing my belief that Linux systems (especially Ubuntu/Debian-based distros) are here to stay. Based on my experience, I can confidently say that unless you have archaic hardware, Linux should provide a much better experience than Windows.

Here is the config of my Zino HD HTPC (purchased in 2010):

1.5GHz AMD Athlon X2 3250e
256MB ATI Radeon HD 3200 (integrated ATI graphics chip)
250GB, 7,200 rpm HDD
DVD burner
Gigabit Ethernet
Dell Wireless-N WLAN 1520 Half MiniCard

Lenovo Wireless Multimedia Remote (a fully working remote when I tried the live CD was a welcome surprise!)

Connected to a 40 Inch LCD TV (Sony Bravia KDL-40S4100) via HDMI.

The setup itself was super smooth – pop in the installation media, select install, and follow the standard Ubuntu installation steps. ALL hardware was detected and fully functional right after the install and reboot. The system takes about 3.5 GB of disk space. Suspend, hibernate, and shutdown work as expected. The UI is very intuitive and responsive, and I definitely consider it an upgrade over Windows Media Center!

To make the UI align perfectly with your display, there is a setting (they really have thought of everything!):

System -> Settings -> System -> Video calibration

If your TV is like mine, you will not see the corner markers shown at http://wiki.xbmc.org/index.php?title=Calibration#Video_calibration (pay close attention to the top-left and bottom-right markers in the images – you want them to align perfectly with the corners of your TV screen). Move your mouse cursor to the top left; the caption will change to indicate that you are adjusting the top-left corner. Then use the arrow keys to move the marker into position. Repeat for the bottom-right corner and the subtitle position.

I tried a bunch of themes (after installing the Fusion repository); however, none were as sleek and well-rounded as the default “Confluence” theme. The open source Linux driver (Gallium 0.4) does NOT support hardware decoding (yet); however, the CPU seems perfectly capable of rendering videos smoothly – the max load I noticed (using the video overlay option) was about 50% per core while playing videos from my NAS.
I am quite happy with 720p video playback, as my internet speed is definitely a bottleneck for streaming 1080p videos.


My Linux Environment Setup

Install tlp to manage the battery (it gave me an extra 20 minutes or so, and the laptop runs cooler):

sudo add-apt-repository ppa:linrunner/tlp
sudo apt-get update
sudo apt-get install tlp tlp-rdw
sudo tlp start

Linux Mint makes setting default applications a snap via the “Preferred Applications” dialog box (under Preferences).

[Screenshot: the “Preferred Applications” dialog]

Install awesome themes – My favorite is “Metro”.

Set fonts to suit your resolution. This is what my “Fonts” screen looks like (works great for a 1920×1080 15-inch screen):

[Screenshot: my “Fonts” settings]

PHP Development/Laravel related stuff

Install Virtualbox:

sudo apt-get install virtualbox-dkms
sudo apt-get install virtualbox virtualbox-qt virtualbox-guest-additions-iso

Install Vagrant:

Download the latest .deb file from the Vagrant site (the version packaged in the default repositories is outdated and not suitable for installing Homestead).

Download and install PHPStorm – My preferred IDE for PHP development

Install Sublime Text – an awesome text editor (also check my commonly used plugins). It is not in the default Ubuntu/Mint repositories, so you need to add a PPA first (the webupd8team PPA is one commonly used option):

sudo add-apt-repository ppa:webupd8team/sublime-text-3
sudo apt-get update
sudo apt-get install sublime-text-installer

Install Git – required for cloning various repos:

sudo apt-get install git

Install php-cli – this is required for composer install via PhpStorm:

sudo apt-get install php-cli

Install Composer (globally) – this is required for composer install via PhpStorm.

Other than the software listed above, I install all other programming environment related utilities within the Vagrant box.

Dell Latitude E6510 & Linux Mint 17 Cinnamon 64 bit (Qiana)

Finally, a version of Linux that is on par with Windows in terms of ease of installation and use! I have tried numerous times in the past to get a “fully” functioning Linux install on my (various) computers; however, I was never completely satisfied – it would invariably lack support for some hardware component or other.

Yesterday, I finally succeeded in installing a fully functional, robust Linux environment on my main work laptop (Dell Latitude E6510) with Linux Mint 17 Cinnamon (Qiana).

Here are my laptop specs:

[Screenshot: laptop specs]

I am happy to report that ALL hardware components work as expected. Most notably, the nagging problem areas I faced with almost all Linux distributions in the past (I have tried various flavors of Ubuntu, LMDE, and Fedora) appear to be resolved:

  • Wireless
  • Suspend/Hibernate/Shutdown support – Including Lid close options
  • Backlit keyboard
  • Touchpad (vertical and horizontal edge scrolling)
  • Support for all “FN” combinations on keyboard (increasing brightness, volume etc)
  • Keyboard pointer (located in middle of keyboard)
  • High-resolution support (1920×1080)
  • Inbuilt webcam

Why I decided to switch from Windows to Linux for my development box:

Pretty much all the tools I commonly use for programming are built primarily for the Linux environment (Windows alternatives are, for the most part, secondary). For example: Git, VirtualBox, Vagrant, and all LAMP-related software such as Composer, PHPUnit, etc. Getting Node.js to work properly on Windows is nightmarish, not to mention the hurdles one has to cross when using an ordinary user account. Modern web-development workflows requiring grunt/gulp/yo are so much easier on Linux. So it just makes more sense to use a pure Linux box for (web) development. Additionally, installing software through the Ubuntu/Mint package manager is so much more convenient than downloading and installing it manually on Windows. So far I had been using Vagrant on Windows; however, the system was never as stable as I would like (frequent box rebuilds, lack of symlink support, etc.).

The Linux software environment is unbelievably efficient. I can run the entire Linux Mint OS (bundled with LibreOffice, Firefox, VLC, and all the other default software), a Vagrant box (Homestead – complete with web server, database server, etc.), and a development environment with PhpStorm, Sublime, Java, etc., in less than 8 GB of disk space! The equivalent Windows software footprint exceeds 60 GB. Also, I have rarely seen RAM usage exceed 2 GB.

I did run into a couple of issues along the way:

1. Immediately after I installed Linux Mint Cinnamon, I found that no GRUB (boot-loader) menu entry had been created for Windows, so the system booted directly into Linux. The Windows partition was intact; for some reason it was just not visible to Linux Mint. The fix was quite easy – install the Boot Repair utility: https://help.ubuntu.com/community/Boot-Repair

2. Driver for the Nvidia NVS 3100M graphics card: The default Nouveau driver caused the system to hang on resume from suspend and hibernate. To fix this, I used the Driver Manager utility to install the recommended Nvidia driver (version 331). After about a day of use I noticed random system hangs (with a gray blank screen) and figured it must be the Nvidia driver, because Nouveau did not cause this behaviour. I then installed an earlier version (304), and it has been stable as a rock. Suspend and hibernate work perfectly with this version of the driver – so much so that I have stopped doing full shutdowns: resume from suspend takes about 2 seconds, and resume from hibernate around 6 seconds!

3. The default icon/font rendering was tiny on my hi-res screen (1920×1080). To fix this, I set font scaling to 1.5 and installed the Default Zoom extension in Firefox to scale pages up to 150%.

So far, I am super impressed with the level of finesse offered by a free OS. I hope to make the Linux partition my primary development workspace.

Using Rocketeer for Easy Deployment to VPS

In this post, I will be referring to Laravel deployments in particular; however, Rocketeer was designed to be framework-agnostic, so the general principles should be transferable to any deployment scenario.

The primary goal of Rocketeer is to interface with your source code manager (SCM) and transfer code from the SCM to your deployment folder. It is important to understand this: Rocketeer does NOT transfer code directly via ssh/scp; you MUST use an SCM (git/svn). In the Laravel context, it also performs other tasks like running migrations, installing composer dependencies, etc.

The flexibility provided by Rocketeer is particularly enticing in a VPS scenario. There are a couple of gotchas to be aware of, especially when your VPS also hosts the git repository itself. Listed below are the steps I take to deploy Laravel apps to my VPS.

Step 1: Prep the Laravel app in question – add the rocketeer dependency, register the service provider, and finally add and commit all code to your local git repository.

Step 2: Prep the remote VPS:

a. I will proceed on the assumption that you have hardened your VPS along the lines outlined in this great post by Bryan Kennedy. This means you will ssh into your box using a predefined username and a key file (as opposed to root+password).

b. From the hardening step above, you will already have added your local machine’s ssh public key to the host’s .ssh/authorized_keys file. Since Rocketeer will open an ssh channel (from the VPS to itself) to do the git clone, you must ADDITIONALLY generate a key pair for the server and add its public key to the same authorized_keys file. This is easily done by executing the following on the host:

ssh-keygen -t rsa -C "email@domain.com"

Accept the defaults, then copy the contents of the generated .ssh/id_rsa.pub into your .ssh/authorized_keys file.
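The generate-and-append steps can be scripted like so (a sketch: it assumes a bash-like shell and the default RSA key path, and skips generation if a key already exists):

```shell
mkdir -p ~/.ssh && chmod 700 ~/.ssh

# generate a key pair non-interactively if none exists yet
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -C "email@domain.com"

# authorize the server's own key so it can git-clone from itself over ssh
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```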

If the above is not properly done, the error message thrown by Rocketeer during the deploy process is:

Unable to clone the repository
Cloning into '/var/www/project1/releases/20140330095757'...
Permission denied (publickey).
fatal: The remote end hung up unexpectedly
Deployment was canceled by task "Deploy"
Execution time: 3.492s

c. Ensure you have Composer (installed globally) and git on your VPS.

d. Settle on a directory-structure convention for your projects (you will do this for each new project). Let’s say:

  • Your project files get deployed to /var/www/project1
  • Your git repo is at /var/git/project1.git

Important: Since files will be deployed under the username you ssh with, make sure that this username is in the www-data group, and that the www-data group has rwx permissions on /var/www. Do the same for the git folder as well.

e. Set up a (bare) git repository on your VPS (/var/git/project1.git) and set the remote server option in your local repo. The repository should be bare, since you will be pushing to it:

mkdir -p /var/git/project1.git
cd /var/git/project1.git
git init --bare

on your local repo:

$ git remote add origin ssh://username@my.server.org/var/git/project1.git

Note how I specified my VPS-hosted git repository: ‘username’ is the user that has ssh access to my VPS. Rocketeer will log in as this user and do a git clone into the deployment directory.

Run the following command to ensure that the remote server is properly set:

$ git remote -v
origin  ssh://username@my.server.org/var/git/project1.git (fetch)
origin  ssh://username@my.server.org/var/git/project1.git (push)

Note that you can use an IP address if you do not have a hostname associated with your VPS. After the remote server is set, you should be able to do:

$ git push origin master

to push your code to your server repository.
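You can rehearse this whole remote-repo flow locally with throwaway paths before touching the VPS (everything below happens inside a temp directory; the names are examples):

```shell
tmp=$(mktemp -d)

# a bare repo standing in for /var/git/project1.git on the VPS
git init --bare "$tmp/project1.git"

# a working repo standing in for your local project
git init "$tmp/work"
cd "$tmp/work"
git config user.email "you@example.com"
git config user.name "you"
echo "hello" > README
git add README
git commit -m "initial commit"

# add the bare repo as a remote and push the current branch to it
git remote add origin "$tmp/project1.git"
git push origin HEAD

# the commit is now visible on the "server" side
git ls-remote origin
```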

Step 3: You are now ready to deploy apps to your VPS using Rocketeer. Rocketeer uses the new Laravel 4.1 RemoteManager component (https://github.com/illuminate/remote), which requires a fairly recent version of PHP.

Fill in the appropriate details in app/config/remote.php (Rocketeer pulls information from this config file into its own config file):

'connections' => array(
    'production' => array(
        'host'      => 'my.server.org',
        'username'  => 'username',
        'password'  => '',
        'key'       => '/home/vagrant/.ssh/id_rsa',
        'keyphrase' => '',
        'root'      => '/var/www',
    ),
),

On running php artisan, you should see the following Rocketeer-specific commands:

 deploy:check Check if the server is ready to receive the application
 deploy:cleanup Clean up old releases from the server.
 deploy:current Display what the current release is
 deploy:deploy Deploy the website.
 deploy:flush Flushes Rocketeer's cache of credentials
 deploy:ignite Creates Rocketeer's configuration
 deploy:rollback Rollback to the previous release, or to a specific one
 deploy:setup Set up the remote server for deployment
 deploy:teardown Remove the remote applications and existing caches
 deploy:test Run the tests on the server and displays the output
 deploy:update Update the remote server without doing a new release.

Start off the process (a once-per-project setup of config options) by typing:

$ php artisan deploy:ignite
No repository is set for the repository, please provide one :ssh://username@my.server.org/var/git/project1.git
Configuration published for package: anahkiasen/rocketeer
What is your application's name ? (project1)
The Rocketeer configuration was created at anahkiasen/rocketeer
Execution time: 7.4155s

Go through the app/config/packages/anahkiasen/rocketeer/remote.php file and ensure that the settings are correct.

I had to make a couple of changes:

Change

'root_directory'   => '/home/www/',

to

'root_directory'   => '/var/www/',

Also, I commented out the ‘composer self-update’ task (I have Composer installed globally in /usr/local/bin, and the username Rocketeer uses to ssh does not have write permissions there, so self-update would fail):

// The process that will be executed by Composer
'composer' => function ($task) {
    return array(
        // $task->composer('self-update'),   // commented out (see above)
        $task->composer('install --no-interaction --no-dev --prefer-dist'),
    );
},

Run the deploy:check command to verify that you are good to go.

$ php artisan deploy:check
Checking presence of git
Checking presence of Composer
Checking presence of mcrypt extension
Checking presence of mysql extension
Checking presence of pdo_mysql extension
Your server is ready to deploy
Execution time: 3.1507s

If any deficiencies are noted, please fix them prior to proceeding.

Once you are ready to deploy, type in

$ php artisan deploy

or

$ php artisan deploy:deploy

If all goes well, you should get a success message from Rocketeer and a copy of your code on the host VPS.

To troubleshoot the deploy process, type in

$ php artisan deploy --verbose

The --verbose switch will display a wealth of information to help you track down the source of the error.

We have barely scratched the surface of Rocketeer (although this covers the most common use-case). Be sure to read the wiki to understand what exactly happens during a “deploy” and to implement more intricate deployment scenarios.

On a related note, git deployments can also be accomplished using git post-receive hooks; there is a great tutorial explaining the process at DigitalOcean. Personally, I prefer the Rocketeer way, as it is easier to implement and also offers a simple version-switching mechanism.


Laravel Hash::make() explained

First, let us run through a couple of observations in Laravel 4:

return Hash::make('test');
return Hash::make('test');
return Hash::make('test', array('rounds'=>12));

The exact hashes will differ for each run (and your results will differ from mine), but the takeaway points are:

1. Hash::make() returns a different hash each time. This is quite curious.
2. The output is always a 60-character string.
3. The initial characters of the hash (the first 7 chars) are metadata.

This post will attempt to demystify some of the inner workings that cause Hash::make() to behave this way. How does Laravel do this? How is the password check performed? And, finally, what advantage does this offer?


Internally, Hash::make() hashes the password with the bcrypt function (based on the Blowfish cipher). For PHP ≥ 5.5, the native password_hash() and password_verify() functions are used; for earlier PHP versions, the compatibility library ircmaxell/password_compat is pulled in by composer. In fact, reviewing the source code of the password_compat library provides a lot of insight into the inner workings of password_hash() and password_verify().

According to the PHP documentation (http://www.php.net/manual/en/function.crypt.php), Blowfish hashing uses “a salt as follows: ‘$2a$’, ‘$2x$’ or ‘$2y$’, a two digit cost parameter, ‘$’, and 22 characters from the alphabet ‘./0-9A-Za-z’”.

So, in our trial run above:

  • $2y$ indicates use of the Blowfish algorithm with a salt
  • 10 is the default “cost” factor
  • a 22-character salt is (randomly) generated and appended to the previous two components
  • this is followed by the hashed password itself

The PHP crypt function (used internally to implement bcrypt) is then called:

crypt($password, $hash);

where $password is the string to be hashed, and $hash is the concatenated value of “$2y$” . “10” . “$” . the 22 random salt characters. This function returns the 60-character hash string associated with the password.

The clever part is that the algorithm, salt, and cost are embedded in the hash itself, and so can easily be parsed back out into individual components for reconstruction/verification (see the relevant sections of the PHP crypt source code at https://github.com/php/php-src/blob/master/ext/standard/crypt.c#L258). Because of this, you don’t need to store the salt or cost separately in a database table.

Password check

For checking a password (wrapped by password_verify() for PHP ≥ 5.5), an internal function semantically equivalent to:

return crypt($password, $hash)==$hash;

is used. The original hash generated at registration time is passed in as the second argument (this is key). The supplied password is salted and run through crypt, which regenerates the original hash provided the same password was given. Note that internally, crypt only cares about the first 29 characters of the passed-in hash (7 metadata + 22 salt). Remember that crypt implements a one-way hash: there is no way to recover the password from the hash, so the only way to verify a password is to hash it with the same salt and compare the results. (Real implementations compare with a constant-time function rather than == to avoid timing attacks.) Both the Hash::check() and Auth::attempt() methods in Laravel run the same check.
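The anatomy described above can be seen by slicing up a hash string. The value below is fabricated purely to have the right shape (60 characters); it is NOT a real bcrypt output:

```shell
# 60-character bcrypt-style string: 7 chars of metadata + 22 of salt + 31 of digest
# (fabricated for illustration - not a real hash)
hash='$2y$10$abcdefghijklmnopqrstuvABCDEFGHIJKLMNOPQRSTUVWXYZabcde'

echo "length: ${#hash}"      # 60
echo "scheme: ${hash:0:4}"   # $2y$ -> Blowfish, salted
echo "cost:   ${hash:4:2}"   # 10   -> 2^10 key-expansion rounds
echo "salt:   ${hash:7:22}"  # the 22-character salt
echo "digest: ${hash:29}"    # the remaining 31 characters
```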


The conventional method of using md5 or sha1 to generate password hashes is insufficient for modern security requirements. Due to advances in computational power, it has become trivial to use a rainbow table (http://en.wikipedia.org/wiki/Rainbow_table) to crack passwords stored as plain md5/sha1 hashes. The use of bcrypt, with its per-password salt, avoids this vulnerability. So, you now have a one-way hash function that is both secure and easy to implement.

Not Yet Ready for Ghost

There has been quite a lot spoken and written about the new blogging platform that is supposedly poised to revolutionize online writing. I took time to play with Ghost this weekend, and here are my thoughts on why I will NOT be moving to it anytime soon.

According to Ghost’s sales pitch, its raison d’être comes down to the following:

1. Use of Markdown

2. Instant feedback regarding look and feel using a split screen

3. Futuristic dashboard

Here are some of my counter arguments (and reasons why I prefer to continue with my wordpress blog):

1. I use Markdown on GitHub because I am forced to; it is not exactly a walk in the park to get things to look the way you want! For plain text, yes. But for non-trivial layouts (even just images), the Markdown syntax can slow you down. It is a new syntax, and learning (and unlearning) is involved (fortunately, Ghost also allows HTML). Still, expecting everyone to use Markdown (or HTML) to compose blogs is a tall order!

2. Windows Live Writer: I have been a die-hard fan of WLW for the past 8-10 years. Currently, WLW cannot talk to Ghost, and that is a big issue for me.

3. Offline access: You need to be online for the Ghost live-preview function to work; otherwise, you will just end up typing Markdown in a notepad-like environment. WLW, mentioned above, solves the offline problem nicely. What’s more, you can even download your blog’s WordPress theme into WLW and see a preview exactly as it would look online. In fact, almost ALL my posts are composed offline in WLW and then published.

4. Free vs paid: There is currently no free offering for Ghost blogs. Most folks currently using Ghost are either on a personal VPS or on a paid Ghost hosting provider. Note that you get no SEO optimization, backups, spam blocking, CDN speedup, or any of that good stuff when you use a personal VPS for your blog (all of which come standard with, say, a free blog hosted on wordpress.com).

5. Comments: In my opinion, blogging as a medium gained popularity because of its social aspect – people can comment on, react to, and question your thoughts and ideas. The advice for Ghost seems to be to integrate Disqus or Facebook for comments; I do not agree with that line of reasoning.

6. Dashboard: Yeah, the Ghost dashboard is indeed yummy, but I also happen to like how WordPress does it. The global map displaying regions of access, and the common search terms used to reach me, are all I usually look at.

Bottom line: This blog uses a dead-simple theme (“suits”) – all sidebars and footers are collapsed to maximize the real estate available for content. Well-received posts of mine usually get promoted to the first or second page of a Google search (which is what really matters – there is no point expending resources on blogging if your content is not reachable). So, if simplicity is your goal, choose a simple theme on a platform that gives you the flexibility to grow.

I still love blogging in my Live Writer client, and for me, WordPress is still “just a blogging platform”.

PS: There is a new plugin named Gust (still in active development) that replicates the Ghost look and feel in WordPress. Still, if you concur with my observations above, this should not enthuse you too much!

Nginx config for hosting multiple projects in sibling folders

I have recently begun the process of migrating (Laravel) apps to Nginx+PHP-FPM.

Needless to say, it is definitely more challenging than configuring good ol’ Apache! For starters, you absolutely must know regular expressions fairly well; Nginx makes extensive use of regexes in its URL-matching rules. The second paradigm shift is learning that Nginx does configs on a “per-application” basis. What I mean is, you cannot just set up the server once, drop apps into the web folder, and hope things work (this is especially true for apps that use clean URLs via .htaccess rewrites). So, prior to setting up any web application, expect to spend some time tweaking the Nginx config. The reward is a nimbler web server that performs much better under load.

I put together a config file that serves multiple Laravel applications stored in sibling folders on a single server: requests under /project1 are served by application 1, and requests under /project2 by application 2.

/vagrant is the root web folder, and /project1 and /project2 are sibling folders within it, each containing a full Laravel application.

server {
    listen 80;
    root /vagrant;
    index index.html index.htm index.php app.php app_dev.php;

    # Make site accessible from http://set-ip-address.xip.io
    access_log /var/log/nginx/vagrant.com-access.log;
    error_log /var/log/nginx/vagrant.com-error.log error;

    charset utf-8;

    # handle static files within project.. break at end to avoid recursive redirect
    location ~ project(\d*)/((.*)\.(?:css|cur|js|jpg|jpeg|gif|htc|ico|png|html|xml))$ {
        rewrite project(\d*)/((.*)\.(?:css|cur|js|jpg|jpeg|gif|htc|ico|png|html|xml))$ /project$1/public/$2 break;
    }

    # project1 and project2 are two laravel projects that you want to serve
    # at /project1 and /project2 respectively
    location /project1 {
        rewrite ^/project1/(.*)$ /project1/public/index.php?$1 last;
    }

    location /project2 {
        rewrite ^/project2/(.*)$ /project2/public/index.php?$1 last;
    }

    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page 404 /index.php;

    # pass the PHP scripts to php5-fpm
    # Note: \.php$ is susceptible to file upload attacks
    # Consider using: "location ~ ^/(index|app|app_dev|config)\.php(/|$) {"
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;

        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;

        ########## IMPORTANT - This section adjusts the request URI sent to laravel ##########
        set $laravel_uri $request_uri;
        if ($laravel_uri ~ project(\d*)(/?.*)$) {
            set $laravel_uri $2;
        }
        ########## Note request uri mod below ##########
        fastcgi_param REQUEST_URI $laravel_uri;

        fastcgi_param LARA_ENV local; # Environment variable for Laravel
        fastcgi_param HTTPS off;
    }

    # Deny .htaccess file access
    location ~ /\.ht {
        deny all;
    }
}
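If you are new to Nginx-style regexes, the rewrites above can be sanity-checked offline with sed (the URLs are throwaway examples, the extension list is abbreviated, and sed needs [0-9] where the Nginx config uses \d):

```shell
# Static-asset rewrite: map a project URL onto that project's public/ folder
echo "/project1/css/app.css" \
  | sed -E 's#^/project([0-9]*)/(.*\.(css|js|png))$#/project\1/public/\2#'
# -> /project1/public/css/app.css

# Clean-URL rewrite: hand everything else to the project's front controller
echo "/project2/users/42" \
  | sed -E 's#^/project2/(.*)$#/project2/public/index.php?\1#'
# -> /project2/public/index.php?users/42

# REQUEST_URI adjustment: strip the /projectN prefix before Laravel sees it
echo "/project2/users/42" \
  | sed -E 's#^/project([0-9]*)(/?.*)$#\2#'
# -> /users/42
```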

Hope you find this useful!

Moving From Zend Framework to Laravel 4

Let me preface this post with the following: I have been a programmer for the past 12 years and have used PHP for nearly 8, ZF1 for nearly 3 (10+ projects), and ZF2 since its inception. So, I understand the framework(s) quite well and have given them a fair amount of trial time.

About 2 months ago, I stumbled upon the Laravel framework. A couple of hours into studying it, I had an epiphany: this is what all other frameworks should strive to be! Easy, functional, modern, and completely out of the way. While its CodeIgniter underpinnings are easily discernible in the folder structure and config files, it is quite a radical departure in terms of the underlying code. Laravel 4 in particular embraces all prevailing best practices in the PHP world.

In comparison, both ZF1 and ZF2 are complicated and have a steep learning curve. I kid you not, I was up and running with Laravel 4 in about 3 hours! (Properly understanding my first “hello world” with ZF2 took me a week.)
In my humble opinion, ZF2 is over-engineered, demanding more attention than the actual web application at hand. The reliance on (deeply nested) arrays for everything from route configs to parameters makes coding a chore. While arrays are speedy and flexible, they are a debugging nightmare: you get pretty much zero IDE support (autocomplete, code completion, etc.) and have to remember all the required keys verbatim (I partly got around this using NetBeans code templates, but the templates became so numerous that remembering them presented its own problem!).

Two projects later, I can confidently say there is nothing I can do in ZF2 that I can’t do with Laravel 4 (in less time). The fact that L4 ties into composer/packagist means that pretty much any open source PHP project (on packagist) can be utilized in a Laravel project. In fact, L4 uses ‘monolog’ for logging, ‘swiftmailer’ for emailing, and ‘symfony’ components for the HTTP core and command line interface, all while providing a very ‘laravelesque’ approach to coding. There really are no limits. Very cool indeed!

Although I feel a certain closeness with ZF due to the sheer amount of time I spent with it, the time is right to switch to L4 for future projects. L4 has done a LOT of things right and deserves credit for it. The Eloquent ORM is very easy to work with. Routing in Laravel is an absolute joy, and it does much of the heavy lifting when it comes to RESTful interfaces.
Form management is beautifully implemented, and it is trivial to create custom form/HTML controls. The DI mechanism is very expressive, as are Filters and Events. The Blade templating engine bears an uncanny resemblance to the Razor engine used by ASP.NET MVC (which I really like!).
Although it looks like L4 uses an abundance of static methods, in reality it harnesses the __callStatic() PHP magic method to load objects from the DI container. The L4 command line tool “artisan” is also very well executed. Unit testing is simple and works right out of the box (no lengthy setup required before running phpunit).

L4 has indeed improved my productivity quite dramatically 🙂

ODBC Linked Tables With Access 2007 & Windows7

I find MS Access to be a convenient tool for interfacing with most relational database engines (most notably MySQL and MSSQL). I am much more productive navigating through linked tables in Access, and writing queries against them, than using dedicated tools such as phpMyAdmin or SSMS.

With Windows XP, this workflow was trivial. However, more recent Windows 7 systems involving a mix of 64-bit hardware and 32-bit software, coupled with user account restrictions, have made RDBMS access via ODBC a little challenging.

Below, I have documented how to get MyODBC (the ODBC driver for MySQL) working with MS Access 2007 on a 64-bit Windows install. Of course, the same process can be used to link to any other ODBC DSN after installing the appropriate drivers.

The problem

  1. MOST Office installations are 32-bit apps. (Although 64-bit Office is available, I have yet to see a production install of it – in fact, even MS strongly discourages its use.) Windows 7 uses the WOW64 (Windows 32-bit On Windows 64-bit) subsystem to ensure 32-bit apps work seamlessly on 64-bit machines.
  2. Windows 7 enforces much tighter user account restrictions. Because creating an ODBC DSN touches the Windows registry, it is a privileged operation. I assume that you are working from a standard user account AND have an admin uid/password available should you need privilege escalation.
  3. The combination of 1 and 2 makes creating an ODBC DSN from MS Access on Windows 7 a slightly convoluted process (compared to XP).

The Solution

  1. Download and install the 32-bit MyODBC driver (MSI) from: http://dev.mysql.com/downloads/connector/odbc/ (NOTE: there is a 64-bit MyODBC driver available for download, but it is only to be used with 64-bit Office apps – so please download the 32-bit version!)
  2. Navigate to C:\Windows\SysWOW64 and locate the file named odbcad32.exe. Create a shortcut to this file on your desktop so you have a handy reference available. (This is the WOW64 equivalent of the Data Sources (ODBC) applet found under Control Panel -> System and Security -> Administrative Tools.) Launch this shortcut – NOT the applet in your Control Panel – to create your (System) DSNs. This operation requires privilege escalation, so you will be prompted for your admin uid and password.
  3. Once you have created your System DSNs in step 2 above, you are ready to use them from within MS Access as before.