Single DHCP server for multiple subnets (VLANs) on a single interface

Surprisingly, this was an extremely hard piece of information to find, at least in a form that fit my need. There were lots of questions in various online posts, but no completely working answer with all the relevant details in one place.

I am going to document it here.

The aggregation router is a pair of Cisco 6506Es in VSS mode, active-active. They have an IP helper address pointing to my DHCP server.

Multiple VLANs and subnets

There was a pretty useful post about a single DHCP server serving multiple subnets on one interface here. But it does not work for my situation: the author is using a fairly simple network, and his DHCP server runs on the gateway.

I have a gateway/router that aggregates multiple VLANs, one of which is a management VLAN that my DHCP server sits on. All the other VLANs have their DHCP relay helper address pointing to my DHCP server (see the diagram above).
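For reference, the relay setup on each of the router's VLAN interfaces looks roughly like this (a sketch, not my actual config; the VLAN number and the DHCP server address 10.1.14.5 are placeholders):

```
interface Vlan102
 ip address 10.1.2.1 255.255.254.0
 ip helper-address 10.1.14.5
```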

Using the “shared-network” statement in dhcpd.conf does not work, as that pools all of the subnet declarations into that single shared network. This is why the blog post uses classes along with “match if” statements to put DHCP client requests into the correct subnets. I have anywhere from 200 to 300+ servers in each VLAN, and they are a mix of gear from different vendors. There is no way I can match on hardware (MAC) addresses without it getting very complicated, not to mention the horror of maintaining that mapping.

ISC DHCP actually supports what I wanted out of the box. The trick is to make all the subnet declarations but not use the “shared-network” statement. Make sure the DHCP relays are set up correctly; when clients make DHCP requests, the requests arrive at the DHCP server with the relay's address in the GIADDR (gateway IP address) field. The DHCP server sees that and knows which subnet it should provide addresses from.
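The server's selection logic can be sketched in a few lines of Python (illustrative only; the subnets mirror the dhcpd.conf below):

```python
import ipaddress

# The /23 subnets declared in the dhcpd.conf below
subnets = [ipaddress.ip_network(s)
           for s in ("10.1.0.0/23", "10.1.2.0/23", "10.1.4.0/23")]

def select_subnet(giaddr):
    """Return the declared subnet containing the relay's GIADDR,
    i.e. the subnet the server should lease an address from."""
    gw = ipaddress.ip_address(giaddr)
    for net in subnets:
        if gw in net:
            return net
    return None  # no matching subnet declaration
```

A request relayed by 10.1.2.1 (the router for the second VLAN) matches 10.1.2.0/23, so the lease comes from that subnet's range.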

Here is the dhcpd.conf portion of the working config. Note that I also do PXE and kickstart boots from this DHCP server.

authoritative;

# this is the most important line. It specifies the method
# to use to connect to the DNS server and update it.
ddns-update-style none;
ddns-domainname "example.com";
ignore client-updates;
option host-name = config-option server.ddns-hostname;

include "/etc/rndc.key";

option domain-name              "example.com";
option domain-name-servers      10.1.14.10,10.1.14.11,10.1.14.12;
option time-offset              -18000; # Eastern Standard Time
option ntp-servers              10.1.14.11;
one-lease-per-client            off;
default-lease-time              86400;
max-lease-time                  604800;
option                          ip-forwarding off;

# PXE
next-server install;
filename "/linux-install/pxelinux.0";

# Subnet for internal hosts
    subnet 10.1.0.0 netmask 255.255.254.0 {
        range 10.1.1.200 10.1.1.253;
        option routers                  10.1.0.1;
        option subnet-mask              255.255.254.0;
        #failover peer "dhcp";
    }

    subnet 10.1.2.0 netmask 255.255.254.0 {
        range 10.1.3.200 10.1.3.253;
        option routers                  10.1.2.1;
        option subnet-mask              255.255.254.0;
        #failover peer "dhcp";
    }

    subnet 10.1.4.0 netmask 255.255.254.0 {
        range 10.1.5.200 10.1.5.253;
        option routers                  10.1.4.1;
        option subnet-mask              255.255.254.0;
        #failover peer "dhcp";
    }

....and so on....

Using LDAP and Kerberos with AjaXplorer

12/18/12 Update: not all is peachy keen. Login and account autocreation work, but logout can be an issue. I need to clear the session cookie when someone logs out; I have not gotten around to coding that yet.

After a bit of fiddling around, I finally got AjaXplorer working with LDAP/Kerberos 5 as the backend for authentication/access.

We use LDAP for the user directory and Kerberos 5 for passwords. It's a little different from what I am used to.

Anyway, I needed to get AjaXplorer working on a large filer so users could access it locally and remotely: essentially our private ‘dropbox’. But getting AjaXplorer working with Kerberos was a bitch! At first I tried using LDAP and got that working.... except LDAP does not have our passwords; that's where Kerberos comes in. I thought about writing my own plugin, but damn it, I don't have time for this.

After lots of googling, experimenting, etc., I found mod_auth_pam, which uses PAM for HTTP basic auth. And since we already use pam_krb5 for logins on our boxes, it's a perfect solution.

Here is the section in my bootstrap_plugins.php:

$PLUGINS = array(
        "CONF_DRIVER" => array(
                "NAME"          => "serial",
                "OPTIONS"       => array(
                        "REPOSITORIES_FILEPATH" => "AJXP_DATA_PATH/plugins/conf.serial/repo.ser",
                        "ROLES_FILEPATH"        => "AJXP_DATA_PATH/plugins/auth.serial/roles.ser",
                        "USERS_DIRPATH"         => "AJXP_DATA_PATH/plugins/auth.serial",
                        "FAST_CHECKS"           => false,
                        "CUSTOM_DATA"           => array(
                                        "email" => "Email",
                                        "country" => "Country"
                                )
                        )
        ),
        "AUTH_DRIVER" => array(
                "NAME"          => "basic_http",
                "OPTIONS"       => array(
                        "USERS_FILEPATH" => "AJXP_DATA_PATH/plugins/auth.pam/users.ser",
                        "AUTOCREATE_AJXPUSER"   => true,
                        "TRANSMIT_CLEAR_PASS"   => false
                )
        ),
        array(
                "NAME"          => "serial",
                "OPTIONS"       => array(
                        "LOGIN_REDIRECT"        => false,
                        "USERS_FILEPATH"        => "AJXP_DATA_PATH/plugins/auth.serial/users.ser",
                        "AUTOCREATE_AJXPUSER"   => false,
                        "FAST_CHECKS"           => false,
                        "TRANSMIT_CLEAR_PASS"   => false
                )
        ),
        "LOG_DRIVER" => array(
                "NAME" => "text",
                "OPTIONS" => array(
                        "LOG_PATH" => (defined("AJXP_FORCE_LOGPATH")?AJXP_FORCE_LOGPATH:"AJXP_INSTALL_PATH/data/logs/"),
                        "LOG_FILE_NAME" => 'log_' . date('m-d-y') . '.txt',
                        "LOG_CHMOD" => 0770
                )
        )
);

And the section in my /etc/httpd/conf.d/ajaxplorer.conf file:

   <Directory "/usr/share/ajaxplorer">
        Options FollowSymLinks
        AllowOverride Limit FileInfo
        Order allow,deny
        Allow from all
        AuthName "Ajaxplorer Access"
        AuthType Basic
        AuthPAM_Enabled on
        Require valid-user
        php_value error_reporting 2
   </Directory>

The trick is these two lines for the “basic_http” auth_driver:


"USERS_FILEPATH" => "AJXP_DATA_PATH/plugins/auth.pam/users.ser",
"AUTOCREATE_AJXPUSER" => true,

That allows my users to log in: the first time, they authenticate via mod_auth_pam, and AjaXplorer creates their account in “AJXP_DATA_PATH/plugins/auth.pam/users.ser”.

NOTE: I had to manually create the plugins/auth.pam directory and an empty users.ser file.
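Something like this (the path is a placeholder; substitute your install's actual AJXP_DATA_PATH and fix ownership for your web server user):

```shell
# Placeholder path: substitute your actual AJXP_DATA_PATH
AJXP_DATA_PATH=./ajxp-data
mkdir -p "$AJXP_DATA_PATH/plugins/auth.pam"
touch "$AJXP_DATA_PATH/plugins/auth.pam/users.ser"
```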

But after that, everything works perfectly.

MongoDB and Riak

12/18/12 UPDATE

Since I am the only DevOps person working on this, and there are tons of other things requiring my attention, I had to drop Riak. The engineers only know MongoDB anyway, and they are reluctant to learn a new NoSQL store (Riak). Crap! So this project has been killed. Too bad.

I have some Python scripts that I wrote to copy MongoDB collections over to Riak; if I have time, I'll open source them.

======================

I've been working with MongoDB at my current $WORK and at previous jobs. It is (or used to be) the nice, shiny toy that everyone rushed to. I've run into numerous limitations in trying to scale it up. Operationally, it can be a nightmare if the architecture was not set up correctly at the beginning.

Mongo is also a PITA to scale. There are major sites that run Mongo instances in the thousands, but at that point it becomes a matter of throwing hardware and money at the problem. That just seems stupid for startups.

So at my current $WORK, we are testing MongoDB, but I wanted to look for an alternative solution before we become fully committed to yet another operational nightmare.

After a lot of googling, testing, experimenting, etc., I decided to try Riak from Basho.

Googling turned up a number of companies that migrated from MongoDB to Riak. Their experiences were useful, but I was looking for a more concrete HOWTO on moving a large MongoDB over to Riak.

First, of course, was to get hands-on experience with Riak: install it, play with it, etc. Then I used the riak-python-client library to start migrating some data over. I wrote a script that works through all the collections in a Mongo database; for each collection, it creates a Riak bucket and adds the Mongo documents to the bucket using the Mongo _id as the key.
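The core of the script looks something like this (a sketch, not the exact script; the host and database names are placeholders, and it assumes pymongo plus the riak-python-client):

```python
def doc_key(doc):
    """Use the stringified Mongo _id as the Riak key."""
    return str(doc["_id"])

def copy_collections(mongo_db, riak_client):
    """For each collection in the Mongo database, create a
    same-named Riak bucket and store each document under its _id."""
    for name in mongo_db.collection_names():
        bucket = riak_client.bucket(name)
        for doc in mongo_db[name].find():
            key = doc_key(doc)
            doc["_id"] = key  # ObjectId isn't JSON-serializable
            bucket.new(key, data=doc).store()

# Usage (placeholders for the real hosts and database name):
#   import riak
#   from pymongo import MongoClient
#   copy_collections(MongoClient("mongo-host")["mydb"], riak.RiakClient())
```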

Right away, I ran into some issues with Riak. I have a 3-node Riak cluster (on 3 physical CentOS 5.8 servers). The MongoDB database I was copying over was large: about 2GB of on-disk file size and over a million records. Partway through the conversion, 2 Riak nodes crashed and died.... WTF! No matter what I did, they wouldn't start back up (the Riak logs showed some kind of Erlang errors, but I don't know Erlang). So I stopped the only node still running, ran ‘rm -rf /var/lib/riak/*’ and ‘killall epmd’, restarted all 3 nodes, and they came back up.

I don't have time to debug this problem, so I restarted the conversion with a smaller subset of the Mongo data. But this crash worries me. The erl_crash.dump shows Riak ran into resource issues and was unable to allocate heap memory. Hmmm.

More on my adventure in evaluating Riak vs MongoDB in the future.

dynamic robots.txt file in Rails 3.x

We need dynamic handling of the robots.txt file, as we have different requirements for production, staging, dev, test, etc.

Google-fu shows various ways to do this, some for Rails 2.x, some for Rails 3.x. Here is my version.

First is to edit config/routes.rb and add this line:


match '/robots.txt' => RobotsGenerator

Then add the following to app_root/lib/classes/robots_generator.rb.

NOTE: We have an old domain, foo.com, that redirects to our newfoo.com. We don't want foo.com to get indexed, so I have special treatment for it in production.

class RobotsGenerator
  # Use the config/robots.txt in production.
  # Disallow everything for all other environments.
  def self.call(env)
    req = ActionDispatch::Request.new(env)
    headers = {}
    body = if Rails.env.production?
      if req.host.downcase =~ /foo\.com$/
        headers = { 'X-Robots-Tag' => 'noindex,nofollow' }
        "User-agent: *\nDisallow: /"
      else
        File.read Rails.root.join('config', 'robots.txt')
      end
    else
      "User-agent: *\nDisallow: /"
    end

    # Rack bodies must respond to #each, so wrap the string in an array
    [200, headers, [body]]
  rescue Errno::ENOENT
    [404, {}, ["User-agent: *\nDisallow: /"]]
  end
end

Finally, you want to move public/robots.txt to config/robots.txt.
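One assumption worth checking: Rails 3 does not autoload lib/ by default, so if your app does not already load lib/classes, you may need something like this in config/application.rb:

```
config.autoload_paths += %W(#{config.root}/lib/classes)
```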

I want to give credit to the people who inspired my version.

My latest brites on the Britely site

11/08/2013

Britely was purchased by Groupon in mid-2011. I went on to work elsewhere. Their site is gone, so all the links below are broken.

Did you know that you can create your very own Brites on the Britely website?

This should always contain the most up-to-date list of all the brites I created and published on Britely.

[britely=http://www.britely.com/tinleorg2/funny-signs width="460"]

[britely=http://www.britely.com/tinleorg2/do-you-feel width="460"]

[britely=http://www.britely.com/tinleorg2/guide-to-online-streaming-video width="460"]

[britely=http://www.britely.com/tinleorg2/dots-life width="460"]

[britely=http://www.britely.com/tinleorg2/yoda-force-follow-it width="460"]