Tag Archives: dns

Access Route53 private zones cross account

Using Route53 private zones can be a great way to maintain a private internal zone for your server infrastructure. However, sometimes you may need to share this zone with another VPC, either in the same AWS account or in a different one.

The first situation is easy – a Route53 zone can be associated with any number of VPCs within a single AWS account using the AWS console.
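If you'd rather script it, the same-account case is a single CLI call with no authorization step needed (the zone and VPC IDs here are placeholders):

# Run in the account that owns both the zone and the VPC.
aws route53 \
associate-vpc-with-hosted-zone \
--hosted-zone-id abc123 \
--vpc VPCRegion=us-east-1,VPCId=vpc-abc456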

The second is more tricky but is doable by creating a VPC association authorization request in the account with the zone, then accepting it from the other account.

# Run against the account with the zone to be shared.
aws route53 \
create-vpc-association-authorization \
--hosted-zone-id abc123 \
--vpc VPCRegion=us-east-1,VPCId=vpc-xyz123 

# Run against the account that needs access to the private zone.
aws route53 \
associate-vpc-with-hosted-zone \
--hosted-zone-id abc123 \
--vpc VPCRegion=us-east-1,VPCId=vpc-xyz123 \
--comment "Example Internal DNS Zone"

# List authorizations once done (run against the account with the zone).
aws route53 \
list-vpc-association-authorizations \
--hosted-zone-id abc123
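Once the association has been accepted, the authorization has done its job and can be tidied up – a sketch using the same placeholder IDs:

# Run against the account with the zone, after the association is accepted.
aws route53 \
delete-vpc-association-authorization \
--hosted-zone-id abc123 \
--vpc VPCRegion=us-east-1,VPCId=vpc-xyz123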

This doesn’t even require VPC peering, since the association happens behind the scenes – the zone becomes resolvable using the default VPC DNS server in each VPC that has been associated.

Note that the one catch is that this does not help you if you’re linking to a non-AWS VPC environment, such as an on-prem data centre via IPSec VPN or Direct Connect. Even though you can route to the VPC and systems inside it, the AWS DNS resolver for the VPC will refuse requests from IP space outside of the VPC itself.

So the only option is to have an EC2 instance acting as a DNS forwarder inside the VPC, which is reachable from the linked data centre and yet, being in the VPC, can use the resolver.
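For example, a minimal dnsmasq configuration for such a forwarder might look like the following – a sketch assuming a 10.0.0.0/16 VPC (whose built-in resolver therefore sits at 10.0.0.2) and a forwarder instance at the placeholder address 10.0.0.50:

# /etc/dnsmasq.conf on the forwarder EC2 instance.
# Listen on this instance's private IP (placeholder).
listen-address=10.0.0.50
# Ignore /etc/resolv.conf and send all queries to the VPC resolver,
# which sits at the base of the VPC CIDR plus two.
no-resolv
server=10.0.0.2

The on-prem name servers can then forward queries for the private zone to the forwarder’s address.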

DNC NZ submission

The DNC has proposed a new policy for .nz WHOIS data which, in my view, unfortunately does not address the current lack of privacy in the .nz namespace. The following is my submission on the matter.

Dear DNC,

I have strong concerns with the proposed policy changes to .nz WHOIS information and am writing to request you reconsider your stance on publication of WHOIS information.

#1: Refuting the requirement of public information for IT and business-related contact

My background is working in IT, and I manage around 600 domains for a large NZ organisation. This would imply that WHOIS data would be useful to me, as per your public good statement; however, I don’t find this to be the case.

My use cases tend to be one of the following:

1. A requirement to get a malicious (phishing, malware, etc) site taken down.

2. Contacting a domain owner to request a purchase of their domain.

3. A legal issue (eg copyright infringement, trademarks, defamation).

4. Determining if my employer actually owns the domain marketing is trying to use today. :-)

Of the above:

1. In this case, I would generally contact the hosting service provider anyway, since the owners of such domains tend to be unreliable or unsure how to even fix the issue, whereas service providers tend to be well practised at pulling such content quickly. The provider’s details can be determined via an IP-address lookup, rather than relying on the technical contact information, which is often just the same as the registrant and doesn’t reflect the actual company hosting the site. None of the registrant information is required to complete this task, although an email address is always good for a courtesy heads-up.

2. Email is satisfactory for this. Address and phone are not required.

3. Given any legal issue is handled by a solicitor, a legal request could be filed with the DNC to release the private ownership information in the event that the domain owner’s email address was non-responsive.

4. An accurate owner name is more than enough.

#2: Internet Abuse

I publish a non-interesting and non-controversial personal blog. I don’t belong to any minority ethnic groups. I’m born in NZ. I’m well off. I’m male. The point being that I don’t generally attract the kind of abuse or harassment that is sadly delivered to some members of the online community.

However, even I receive abuse relating to my online presence on occasion, in the form of anonymous abusive emails. This doesn’t faze me personally, but if I was in one of the many online minorities that can (and still do) suffer real-world physical abuse, I might not be so blasé knowing that it doesn’t take much for someone to turn up at my home and deliver that abuse in person.

It’s also extremely easy for an online debate to result in a real-world incident. It isn’t hard to trace a person’s social media comments to their blog/website and, from there, their real-world address. Nobody likes angry morons abusing them at 2am outside their house with a tire iron over a Twitter post.

#3: Cold-blooded targeting

I’ve discussed my needs as an IT professional for WHOIS data and the issue of internet abuse. Finally, I wish to point out the danger of exposing one’s address publicly, when we consider what a smart, malicious player can do with the information.

* With a target’s date of birth (thanks Facebook!) and their address (thanks DNC policy!), you’re in a position to fake someone’s identity with a number of NZ organisations, including insurance and medical providers, which use these two (weak) forms of validation.

* Tweet a picture of your coffee at Mojo this morning? Excellent, your house is probably unoccupied for 8 hours, I need a new TV.

* Posting blogs about your amazing international trip? Should be a couple good weeks to take advantage of this – need a couch to go with that TV.

* Mentioned you have a young daughter? Time to wait for her at your address after school events and intercept her there. It’s not hard to be “Uncle Bob from the UK here to take you for candy” when you have the address, names and habits thanks to the combined forces of real-world location and social media disclosure.

Not exposing information that doesn’t need to be public is a textbook infosec best practice to prevent social engineering attacks. We (try to be) cautious about what we tell outsiders, because lots of small bits of information become very powerful very quickly. Yet we’re happy for people to slap their real-world home address on the internet for anyone to take advantage of, because no harm could possibly come of this?

To sum up, I request the DNC please reconsider this proposed policy and:

1. Restrict the publication of physical addresses and phone numbers for all privately owned .nz domains. This information has little real use and offers avenues for very disturbing and intrusive abuse and targeting. At least email abuse can be deleted from the comfort of your couch.

2. Retain the requirement for a name and contact email address to be public. However, permit the publicly displayed name to be a pseudonym, to preserve privacy for users who consider themselves at risk, with the owner’s real/legal name held by the DNC for legal contact situations.

I have no concerns if the DNC were to keep business-owned domain information public. Ltd companies’ director contact details are already publicly available via the companies register, and most business-owned domains simply list their place of business and their reception phone number, which doesn’t expose any particular person. My concern is the lack of privacy for New Zealanders rather than businesses.

Thank you for reading. I am happy for this submission to be public.

Regards,

Jethro

Funny tasting Squid Resolver

Squid is a very popular (and time-tested) proxy server; it’s generally the go-to solution for a proxy server in a *nix environment, capable of providing general caching proxy services (including transparent ones) as well as more sophisticated reverse proxy solutions.

I recently ran into an issue where Squid was refusing to resolve some DNS addresses on our network – not an uncommon problem if using a public DNS server instead of an internal-only DNS server by mistake.

The first step was to check the nameservers listed in /etc/resolv.conf and make sure they were correct and returning valid results. In this case they were, all the name servers correctly resolved the address without any issue.

Next step was to check for specific configuration in Squid – some applications like Squid and Nginx allow you to explicitly set their nameservers to something other than the contents of /etc/resolv.conf. In this case there was no such configuration; in fact, there was no configuration relating to DNS at all, meaning it would have to fall back to the operating system resolver.

Or does it? Generally, Linux applications use the OS resolver, which follows a set order: hosts defined explicitly in /etc/hosts are checked first, then the nameservers in /etc/resolv.conf are tried. When either file is changed, the changes take effect on the very next query.
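The lookup order itself comes from the hosts line in /etc/nsswitch.conf, which on most Linux distributions reads something like:

# /etc/nsswitch.conf (typical default)
hosts: files dns

i.e. check /etc/hosts (“files”) first, then fall back to the DNS name servers.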

However Squid has its own approach. Unless it’s using DNS name servers specifically defined in its configuration file, instead of using the OS resolver it reads /etc/resolv.conf once at startup, then continues to use the name servers that were defined there for the lifetime of the process.

You can see this in the logs – at startup time Squid logs the servers it’s using in cache.log:

# grep nameserver /var/log/squid/cache.log
2014/07/02 11:57:37| Adding nameserver 192.168.1.10 from /etc/resolv.conf

From this, the sequence of events is simple to figure out:

  1. A server was brought online, using a public DNS server that lacked some of our internal records.
  2. Squid was started up, reading in that DNS server from /etc/resolv.conf.
  3. The DNS server addresses were corrected and the fix took effect immediately for all other applications – but Squid was still stuck with the old address, so it continued to fail the queries.

Resolving the immediate issue is as simple as restarting the Squid process to force it to pick up the new resolver settings. But what if your DNS server values could change at any future stage without warning?

If you’re using Puppet, you could use a custom fact (like this one) that exposes the current name servers on the system, then write them into the Squid configuration file using the dns_nameservers configuration parameter and notify the Squid service to reload on any change to the configuration file.

Or, if your Squid server is always going to use a particular DNS server regardless of what the host is using, you can simply set the dns_nameservers parameter in Squid to point at the desired servers.
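For example, in squid.conf (the server addresses are placeholders):

# Pin Squid's resolver to these name servers, ignoring /etc/resolv.conf.
dns_nameservers 192.168.1.10 192.168.1.11

Reload or restart Squid afterwards so the new setting takes effect.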

Route53 with NamedManager 1.8.0

Just released NamedManager 1.8.0, my open source web-based DNS management tool. This release fixes some bugs with MySQL 5.6 and internationalized domain names, but also includes support for using Amazon AWS Route53 alongside the existing Bind9 support.

Just add a name server entry with the type of Route53 and your Amazon credentials and a background process will sync all DNS changes to Route53. You can mix and match thanks to the groups feature, so if you want some zones going to both Bind9 and Route53 and others going to just Route53 or Bind9, you can do so.

NamedManager, now with cloudy goodness.

As always, the easiest installation is from the provided RPMs, however you can also install from tarball or from Git – just refer to the installation documentation.

This feature is considered stable, however it is new, so be wary of bugs and issues – and report any you encounter back to me via email or the project’s issue tracker.

Exposing name servers with Puppet Facts

Carrying on from the last post, I needed a good reliable way to point my Nginx configuration at a DNS server to use for resolving backends. The issue is that I wanted my Puppet module to be portable across various environments, some which block outbound DNS traffic to external services and others where the networks may be redefined on a frequent basis and maintaining an accurate list of all the name servers would be difficult (eg the cloud).

I could have used dnsmasq to set up a localhost resolver, but when it comes to operational servers, simplicity is key – having yet another daemon that could crash or cause problems is never desirable if there’s a simpler way to solve the issue.

Instead I used Facter (sic), Puppet’s tool for exposing values pulled from the system into variables that can be used in your Puppet manifests or templates. The following custom fact is included in my Puppet module and is run before any configuration is applied to the host running my Nginx configuration:

#!/usr/bin/env ruby
#
# Returns a string with all the IPs of all configured nameservers on
# the server. Useful for including into applications such as Nginx.
#
# I live in mymodulenamehere/lib/facter/nameserver_list.rb
# 

Facter.add("nameserver_list") do
    setcode do
      nameserver = false

      # Find all the nameserver values in /etc/resolv.conf
      File.open("/etc/resolv.conf", "r").each_line do |line|
        if line =~ /^nameserver\s*(\S*)/
          if nameserver
            nameserver = nameserver + " " + $1
          else
            nameserver = $1
          end
        end
      end

      # If we can't get any result (bad host config?) default to a
      # public DNS server that is likely to be reachable.
      unless nameserver
        nameserver = '8.8.8.8'
      end

      nameserver
    end
end

On a system with a typically configured /etc/resolv.conf file such as:

search example.com
nameserver 192.168.0.1
nameserver 10.1.1.1

The fact will expose the nameservers in a space-delimited string such as:

# facter -p | grep 'nameserver_list'
nameserver_list => 192.168.0.1 10.1.1.1

I can then use the Fact inside my Puppet templates for Nginx to configure the resolver:

server {
    ...
    resolver <%= @nameserver_list %>;
    resolver_timeout 1s;
    ...
}

This works pretty well, but there are a couple of things to watch out for:

  1. If the Fact fails to execute at all, your configuration will be broken. Having said that, it’s a very simple Fact and there’s not a lot that could really fail (eg it has no dependencies on other apps or non-standard resources).
  2. Linux hosts resolve DNS using the nameservers specified in /etc/resolv.conf, in order – if one fails, the resolver moves on and tries the next. Nginx, however, differs and just uses the list of provided nameservers in round-robin fashion. This is fine if your nameservers are all equals, but if some are more latent or less reliable than others, it could cause slight delays.
  3. You want to drop the resolver_timeout to 1 second, to ensure a failing nameserver doesn’t hold up re-resolution of DNS for too long. Remember that this re-resolution should only occur when the TTL of the DNS records for the backend has expired, so even if one DNS server is bad, it should have almost no impact on the performance of your requests.
  4. Nginx isn’t going to pick up entries in /etc/hosts using these resolvers. This should be common sense, but I thought I’d better put that out there just in case.
  5. This Ruby could be better, but I’m not a dev and hacked it up in 15 minutes. The regex should probably also be improved to handle some of the more exotic /etc/resolv.conf files that I’m sure people manage to write – a slightly tidier variant is sketched below.
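For what it’s worth, a tidier version of the same fact might look like this – a sketch, behaviourally equivalent aside from requiring at least one whitespace character after the nameserver keyword:

#!/usr/bin/env ruby
#
# Tidier take on nameserver_list: collect every nameserver entry from
# /etc/resolv.conf in a single pass, falling back to a public resolver
# if none are found.

Facter.add("nameserver_list") do
  setcode do
    servers = File.read("/etc/resolv.conf").scan(/^nameserver\s+(\S+)/).flatten
    servers.empty? ? "8.8.8.8" : servers.join(" ")
  end
end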

Nginx, reverse proxies and DNS resolution

Nginx is a pretty awesome high performance web server and reverse proxy. It’s often used in conjunction with other HTTP servers such as Java/Tomcat and Ruby/Unicorn, as it allows static content to be served directly from disk by Nginx and for connections from slow clients to be queued and buffered by Nginx, rather than taking up time of the expensive/scarce application server worker processes.


A typical Nginx reverse proxy configuration to a single backend using proxy_pass to a local HTTP server application on port 8080 would look something like this:

server {
    ...
    proxy_pass http://localhost:8080;
    ...
}

Another popular approach is having a defined upstream group (which can be used for multiple servers, or a single one if desired), for example:

upstream upstream-localhost {
    server localhost:8080;
}

server {
    ...
    proxy_pass http://upstream-localhost;
    ...
}

Generally this configuration works fine for most of our use cases – we typically have a 1-to-1 mapping between a backend application server and Nginx, so the configuration is very simple and reliable – any issues are usually with the backend application, rather than Nginx itself.


However on occasion there are times when it’s desirable to have Nginx talking to a backend on another server.

I recently implemented an OAuth2 gateway using Nginx-Lua, with the Nginx gateway doing the OAuth2 authentication in a small Lua module before passing the request through to the backend application. This configuration ran on a pair of bastion servers, which reverse proxy the request through to an Amazon ELB which load balances a number of application servers.

This works perfectly 95% of the time, but Amazon ELBs (even internal ones) have a tendency to change their IP addresses. Normally this doesn’t matter, since you never reference ELBs by IP address and use their DNS name instead; however, the default behaviour of the Nginx upstream and proxy modules is to resolve DNS at startup and never re-resolve it during the operation of the application.

This leads to a situation where the Amazon ELB IP address changes and Amazon updates the DNS record, but Nginx never re-resolves the DNS record and stays pointing at the old IP address. Subsequently, requests to the backend start failing once Amazon drops service from the old IP address.

This lack of re-resolution of backends is a known limitation/issue with Nginx. Thankfully there is a workaround to force Nginx to re-resolve addresses, as per this mailing list post: set proxy_pass to a variable, which forces re-resolution of the DNS name, since Nginx treats variables differently to static configuration.

server {
    ...
    resolver 127.0.0.1;
    set $backend_upstream "http://dynamic.example.com:80";
    proxy_pass $backend_upstream;
    ...
}


A resolver (DNS server address) also needs to be configured: when using parametrised backends, Nginx is unable to use the local OS resolver and must be pointed directly at a name server IP address.

If your name servers aren’t predictable, you could install something like dnsmasq to provide a local resolver on 127.0.0.1 which then forwards to the dynamically assigned name server, or take the approach of pulling the name server details from the host using something like Puppet Facts and writing them into the configuration file when it’s generated on the host.

Nginx >= 1.1.9 will re-resolve DNS records based on their TTL, but it’s possible to override this with any value desired.
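For example, to ignore the records’ TTLs and force re-resolution every 30 seconds, the valid parameter can be added to the resolver directive (the address and hostname are placeholders, as before):

server {
    ...
    resolver 127.0.0.1 valid=30s;
    set $backend_upstream "http://dynamic.example.com:80";
    proxy_pass $backend_upstream;
    ...
}

To verify correct behaviour, tcpdump will quickly show whether re-resolution is working: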

# tcpdump -i eth0 port 53
15:26:00.338503 IP nginx.example.com.53933 > 8.8.8.8.domain: 15459+ A? dynamic.example.com. (54)
15:26:00.342765 IP 8.8.8.8.domain > nginx.example.com.53933: 15459 1/0/0 A 10.1.1.1 (70)
...
15:26:52.958614 IP nginx.example.com.48673 > 8.8.8.8.domain: 63771+ A? dynamic.example.com. (54)
15:26:52.959142 IP 8.8.8.8.domain > nginx.example.com.48673: 63771 1/0/0 A 10.1.1.2 (70)

It’s a bit of an annoyance in an otherwise fantastic application, but as long as you are aware of the limitation, it is not too difficult to resolve the issue by a bit of configuration adjustment.

NamedManager 1.6.0

I’ve just finished up a few changes to NamedManager this weekend and released version 1.6.0. It provides a few bug fixes and small improvements, as well as adding support for IPv6 PTR (reverse) records, so you can now maintain both forward and reverse DNS for IPv4 and IPv6 with NamedManager.

IPv6 AAAA records on a domain

When you add records with NamedManager, you can have a reverse PTR record added for your particular A or AAAA record by ticking a checkbox. NamedManager then generates the appropriate reverse record for you, simplifying the process of managing DNS.

IPv6 PTR records

If you’re interested, you can download NamedManager from my project website (tarball or Git), from GitHub, or as RPMs for RHEL/CentOS 5/6.

NamedManager 1.5.1

I’ve pushed a new release of NamedManager, version 1.5.1. This is a minor bug fix release providing:

  1. Bug fix for handling of TXT records, where extra slashes would be entered into the record due to an input validator bug.
  2. The Bind configuration writer now runs the Bind-supplied validators for configuration and DNS zone files, and refuses to reload Bind unless they pass.

The first change is naturally important if you’re using TXT records, as it fixes a serious issue with their handling (no security problems, but corrupted zonefiles would result at times).

Even if you’re not using TXT records, the second change is worth upgrading to as it makes the Bind configuration generator much more robust and prevents any potential future bugs from ever feeding Bind a bad zonefile.

Pre-1.5.1, we relied on Bind’s reload process to validate the files; however, this suffered from an issue where the error might not be reported back to the user, who would only discover the problem the next time Bind restarted. This change prevents a new zonefile from being loaded into place until the validator passes it, so the worst case is that your DNS just refuses to accept changes, whilst logging loudly in the web interface back to you. :-)

If you upgrade, take advantage of this feature by adding the following to /etc/namedmanager/config-bind.php, or wherever your Bind component configuration file is installed:

$config["bind"]["verify_zone"]    = "/usr/sbin/named-checkzone";
$config["bind"]["verify_config"]  = "/usr/sbin/named-checkconf";

NamedManager 1.5.1 can be found at the project page or in my packaged repositories.

Presenting NamedManager

A while ago I had a project to build a DNS management application for a client, which has since been refined and improved further, and finally released as “NamedManager” now that I’ve had time to re-do the documentation for a public audience.

NamedManager is an AGPL web-based DNS management system designed to make the adding, adjusting and removal of zones/records easy and reliable via a simple yet effective interface.

Rather than attempting to develop a new name server, NamedManager supports the tried and tested Bind name server and can integrate nicely into existing complex Bind configurations including servers with multiple views without clobbering custom configurations.

Configuring zone records with NamedManager.

It’s written in PHP 5 and uses a MySQL database for storing the DNS record information, which is then converted into Bind-compatible configuration files and copied to the name servers – this ensures that any loss of the NamedManager application or database will not result in a loss of DNS services.

It’s a stable application, having been in some large production environments for over a year, although there’s certainly more work planned, such as the addition of IPv6 PTR records and improved UI around SRV and SPF record entry.

NamedManager includes an interface for tracking the sync status of the latest changes across all your name servers, as well as understanding the differences between internal only and publicly accessible name servers and generating the appropriate NS records for domains automatically.

An included daemon can (optionally) watch the Bind name server logs and send them back to the web interface, so that you can watch all your name servers via an AJAX log interface, making it easier to spot issues or debug queries.

Server status report – see if your hosts have synced DNS changes and are reporting logs.

Forward domains for both IPv4 and IPv6 are supported, as are IPv4 reverse domains (IPv6 reverse support to come in a future release), along with the ability to import Bind zone files into the application (works for most, unless yours is too ugly/complex).

View of all the domains active on this DNS cluster with NamedManager.

For developers, NamedManager features a SOAP API which can be used to manage DNS records – this has been used to hook into other provisioning tools (eg cloud instance management tools) to reduce the manual effort of keeping records clean and tidy.

The code structure of NamedManager would make it possible to add support for additional name servers as desired; I would be keen to see support for PowerDNS and Amazon Route 53 as options in future – as always, patches welcome. ;-)

If you’re interested in checking it out, view the NamedManager project page here and follow the instructions to install from RPM, source tarball or SVN.