Monthly Archives: May 2016

DNC NZ submission

The DNC has proposed a new policy for .nz WHOIS data which, in my view, unfortunately does not address the current lack of privacy in the .nz namespace. The following is my submission on the matter.

Dear DNC,

I have strong concerns with the proposed policy changes to .nz WHOIS information and am writing to request you reconsider your stance on publication of WHOIS information.

#1: Refuting requirement of public information for IT and business related contact

My background is in IT and I manage around 600 domains for a large NZ organisation. This would imply that public WHOIS data is useful to me, as per your public good statement; however, I don’t find this to be the case.

My use cases tend to be one of the following:

1. A requirement to get a malicious (phishing, malware, etc) site taken down.

2. Contacting a domain owner to request a purchase of their domain.

3. A legal issue (eg copyright infringement, trademarks, defamation).

4. Determining if my employer actually owns the domain marketing is trying to use today. :-)

Of the above:

1. In this case, I would generally contact the hosting service provider anyway, since the owners of such domains tend to be unreliable or unsure of how to fix the issue, whereas service providers tend to be much better at pulling such content quickly. The provider’s details can be determined via an IP-address lookup to find who hosts the site, rather than relying on the technical contact information, which is often just the same as the registrant and doesn’t reflect the actual company hosting the site. None of the registrant information is required to complete this task, although an email address is always good for a courtesy heads-up.

2. Email is satisfactory for this. Address & phone are not required.

3. Given that any legal issue is handled by a solicitor, a legal request could be filed with the DNC to release the private ownership information in the event that the email address of the domain owner was non-responsive.

4. Accurate owner name is more than enough.

#2: Internet Abuse

I publish an uninteresting and non-controversial personal blog. I don’t belong to any minority ethnic groups, I was born in NZ, I’m well off and I’m male. The point being that I don’t generally attract the kind of abuse or harassment that is sadly delivered to some members of the online community.

However even I occasionally receive abuse relating to my online presence, in the form of anonymous abusive emails. This doesn’t faze me personally, but if I were in one of the many online minorities that can (and still do) suffer real-world physical abuse, I might not be so blasé knowing that it doesn’t take much for someone to turn up at my home and deliver that abuse in person.

It’s also extremely easy for an online debate to result in a real world incident. It isn’t hard to trace a person’s social media comments to their blog/website and, from there, their real world address. Nobody likes an angry moron turning up outside their house at 2am with a tire iron to rant about a Twitter post.

#3: Cold-blooded targeting

I’ve discussed my needs as an IT professional for WHOIS data and the issue of internet abuse. Finally, I wish to point out the issue of exposing one’s address publicly, when we consider what a smart, malicious player can do with that information.

* With a target’s date of birth (thanks Facebook!) and their address (thanks DNC policy!), you’re in a position to fake someone’s identity with a number of NZ organisations, including insurance and medical providers, which use these two (weak) forms of validation.

* Tweet a picture of your coffee at Mojo this morning? Excellent, your house is probably unoccupied for 8 hours, I need a new TV.

* Posting blogs about your amazing international trip? Should be a couple good weeks to take advantage of this – need a couch to go with that TV.

* Mentioned you have a young daughter? Time to wait for her at your address after school events and intercept her there. It’s not hard to be “Uncle Bob from the UK, here to take you for candy” when you have addresses, names and habits thanks to the combined forces of real world location and social media disclosure.

Not exposing information that doesn’t need to be public is textbook infosec best practice for preventing social engineering attacks. We (try to be) cautious about what we tell outsiders, because lots of small bits of information become very powerful very quickly. Yet we’re happy for people to slap their real world home address on the internet for anyone to take advantage of, because no harm could possibly come of this?

To sum up, I request the DNC please reconsider this proposed policy and:

1. Restrict the publication of physical addresses and phone numbers for all private .nz domains. This information has little real use and offers avenues for very disturbing and intrusive abuse and targeting. At least email abuse can be deleted from the comfort of your couch.

2. Retain the requirement for a name and contact email address to be public. However, permit the publicly displayed name to be a pseudonym, to preserve privacy for users who consider themselves at risk, with the owner’s real/legal name held by the DNC for legal contact situations.

I have no concerns if the DNC were to keep business-owned domain information public. Ltd companies’ director contact details are already publicly available via the companies register, and most business-owned domains simply list their place of business and reception phone number, which doesn’t expose any particular person. My concern is the lack of privacy for individual New Zealanders rather than businesses.

Thank you for reading. I am happy for this submission to be public.

regards,

Jethro

Faking a Time Capsule with a GNU/Linux server

Apple MacOS’s Time Machine feature is a great backup solution for general desktop use, but has some annoying limitations such as only working with either locally attached storage devices or with Apple’s Time Capsule devices.

Whilst the Time Capsules aren’t bad devices, they offer a whole bunch of stuff I already have and don’t need – a WiFi access point, an ethernet router and network attached storage – and they’re not exactly cheap either. They also don’t help anyone wanting to back up to an off-site cloud server/VPS via a VPN.

So instead of a Time Capsule, I’m using a project called netatalk to allow a GNU/Linux server to provide an AFP file share to MacOS which acts as a suitable Time Machine target.

There’s an annoyance with Time Machine where it only officially works with AFP shares specially flagged as “Time Machine” shares. So whilst Apple has embraced SMB2 as the file sharing protocol of the future, you can’t use SMB2 for Time Machine backups. (Well, technically you can by enabling unsupported volumes in MacOS, but then you lose the ability to restore from backup via the MacOS recovery tools.)
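For reference, the commonly documented way of enabling unsupported network volumes is the following defaults flag – I’m not recommending it, and it still leaves you without the recovery-tool restore path:

defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1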

To make life easy, I’ve written a Puppet module that installs netatalk and configures a Debian GNU/Linux server to act as a Time Capsule for all local users.

After installing the Puppet module (via r10k or the puppet module tool), you can simply define the directory and how much space to report to each client:

class { 'timemachine':
  location     => '/mnt/backup/timemachine',
  volsizelimit => '1000000', # 1TB per user backing up
}
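Under the hood, the module manages a netatalk configuration along roughly these lines – this is only an illustrative afp.conf sketch, the share name and exact options written by the module may differ:

[Global]
mimic model = TimeCapsule6,106

[Time Machine]
path = /mnt/backup/timemachine
time machine = yes
vol size limit = 1000000

The key parts are “time machine = yes”, which flags the AFP share as a Time Machine target, and “vol size limit”, which caps the space (in MiB) reported to clients.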

To set up each MacOS machine, you will need to first connect to the share using Finder. You can do this via Finder -> Go -> Connect to Server, entering afp://SERVERNAME and authenticating with your PAM credentials for the server.

After connecting, the share should now appear under Time Machine preferences. If you experience any issues connecting, check the /var/log/afpd.log file for debug information on the server – common issues include not having created the directories for the shares or having incorrect permissions on them.
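If you prefer the command line over Finder, the destination can generally also be set with tmutil – the share name here is a placeholder, substitute whatever the server exposes:

sudo tmutil setdestination "afp://username:password@SERVERNAME/TimeMachine"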

Easy IKEv2 VPN for mobile devices (inc iOS)

I recently obtained an iPhone and needed to connect it to my VPN. However my existing VPN server was an OpenVPN installation, which works nicely on traditional desktop operating systems and Android, but the iOS client is a bit more questionable, having last been updated in September 2014 (pre iOS 9).

I decided to look into what the “proper” VPN option would be for iOS in order to get something supported by the OS as smoothly as possible. Last time I looked, this space was full of wonderful horrors like PPTP (not actually encrypted!!) and L2TP/IPSec (configuration hell), so I had always avoided it like the plague.

However as of iOS 9+, Apple has implemented support for IKEv2 VPNs, which offers an interesting new option. What particularly made this option attractive for me is that I can support every device I have with the one VPN standard:

  • IKEv2 is built into iOS 9 and MacOS El Capitan.
  • IKEv2 is built into Windows 10.
  • Works on Android with a third party client (hopeful for native integration soonish?).
  • Naturally works on GNU/Linux.

Whilst I love OpenVPN, being able to use the stock OS features instead of a third party client is always nice, particularly on mobile where power management and background tasks behaviour can be interesting.

IKEv2 on mobile also has some other nice features, such as MOBIKE, which makes it very seamless when switching between different networks (like the cellular to WiFi dance we do constantly with phones/tablets). This is something that OpenVPN can’t do – whilst it’s generally fast and reliable at establishing a connection, a change in the network means issuing a reconnect; it doesn’t just move the current connection across.

 

Given that I run GNU/Linux servers, I went for one of the popular IPSec solutions available on most distributions – StrongSwan.

Unfortunately whilst its technical capabilities are excellent, its documentation isn’t great. The best way to describe it is that every option is documented, but which options you need and why you’d want to use them? Not so much. The “left” vs “right” style of configuration is also a right pain to work with; it’s not a format that reads nicely and clearly.
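To give a sense of the left/right style, a minimal IKEv2 roadwarrior connection in ipsec.conf looks something like the following, with “left” being the local/server side and “right” the remote clients. This is only an illustrative sketch reusing the hostname and ranges from the Puppet example further down, not the exact configuration the module generates:

conn roadwarrior
    keyexchange=ikev2
    left=%any
    leftid=@vpn.example.com
    leftcert=vpn.example.com.crt
    leftsubnet=0.0.0.0/0
    right=%any
    rightauth=pubkey
    rightsourceip=10.10.10.0/24
    auto=add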

Trying to find clear instructions and working examples of configurations for doing IKEv2 with iOS devices was also difficult, and there are some real traps for young players, such as generating SHA1 certs instead of SHA2 when using the tools with their defaults.

The other bit of fun is that I also wanted my iOS device set up properly to:

  1. Use certificate based authentication, rather than PSK.
  2. Only connect to the VPN when outside of my house.
  3. Remain connected to the VPN even when moving between networks, etc.

I found the best way to make it work was to use Apple Configurator to generate a .mobileconfig file for my iOS devices that includes all my VPN settings and certificates in an easy-to-import package, but also (critically) allows me to define options that are not selectable by end users, such as on-demand VPN establishment.
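That on-demand behaviour ends up as an OnDemandRules array inside the VPN payload of the .mobileconfig. A rough sketch of the relevant fragment is below – the exact structure produced by Apple Configurator may differ, and “wifihouse” is just the home SSID used in the example further down:

<key>OnDemandEnabled</key>
<integer>1</integer>
<key>OnDemandRules</key>
<array>
    <dict>
        <key>InterfaceTypeMatch</key>
        <string>WiFi</string>
        <key>SSIDMatch</key>
        <array>
            <string>wifihouse</string>
        </array>
        <key>Action</key>
        <string>Disconnect</string>
    </dict>
    <dict>
        <key>Action</key>
        <string>Connect</string>
    </dict>
</array>

The first rule drops the VPN while on the home WiFi; the catch-all second rule brings it up everywhere else.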

 

After a few nights of messing around and cursing the fact that all the major OS vendors haven’t just implemented OpenVPN, I managed to get a working connection. To save others the same pain, I considered writing a guide – but it’s actually a really complex setup, so instead I decided to write a Puppet module (clone from github / or install from puppetforge) which does the following heavy lifting for you:

  • Installs StrongSwan (on a Debian/derived GNU/Linux system).
  • Configures StrongSwan for IKEv2 roadwarrior style VPNs.
  • Generates all the CA, cert and key files for the VPN server.
  • Generates each client’s certs for you.
  • Generates a .mobileconfig file for iOS devices so you can have a single import of all the configuration, certs and ondemand rules and don’t have to have a Mac to use Apple Configurator.

This means you can save yourself all the heavy lifting and set up a VPN with as little as the following Puppet code:

class { 'roadwarrior':
  manage_firewall_v4 => true,
  manage_firewall_v6 => true,
  vpn_name           => 'vpn.example.com',
  vpn_range_v4       => '10.10.10.0/24',
  vpn_route_v4       => '192.168.0.0/16',
}

roadwarrior::client { 'myiphone':
  ondemand_connect       => true,
  ondemand_ssid_excludes => ['wifihouse'],
}

roadwarrior::client { 'android': }

The above example sets up a routed VPN using 10.10.10.0/24 as the VPN client range, and routes the 192.168.0.0/16 network behind the VPN server back through it. (Note that I haven’t added masquerading options yet, so your gateway has to know to route the vpn_range back to the VPN server.)
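If adding that route to your gateway isn’t an option, an interim workaround is to masquerade the VPN range on the VPN server itself – something like the following, assuming eth0 is the server’s LAN-facing interface (a manual stopgap, not something the module does yet):

iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE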

It then defines two clients – “myiphone” and “android”. In the .mobileconfig file generated for the “myiphone” client, it will specifically generate rules that cause the VPN to maintain a constant connection, except when connected to a WiFi network called “wifihouse”.

The certs and .mobileconfig files are helpfully placed in /etc/ipsec.d/dist/ for your rsync’ing pleasure, including a few different formats to help load them onto fussy devices.
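For example, something as simple as the following pulls everything down to a workstation for distribution to devices (hostname taken from the example above):

rsync -av vpn.example.com:/etc/ipsec.d/dist/ ~/vpn-clients/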

 

Hopefully this module is useful to some of you. If you’re new to Puppet but want to take advantage of it, you could always check out my introduction to Puppet with Pupistry guide.

If you’re not sure about my Puppet modules or prefer other config management systems (or *gasp* none at all!), the Puppet module should be fairly readable and easy enough to translate into your own commands to run.

There are a few things I still want to do – I haven’t yet done IPv6 configuration (which I’ll fix, since I run a dual-stack network everywhere) and I intend to add a masquerade firewall feature for those struggling to route properly between their VPN and LAN.

I’ve been using this configuration for a few weeks on a couple of iOS 9.3.1 devices and it’s been working beautifully, especially with the on-demand configuration, which unfortunately I haven’t been able to replicate on other devices (like Android or MacOS) yet. The power consumption overhead seems minimal, but of course your mileage may vary.

It would be good to test with Windows 10 and as many other devices as possible. I don’t intend for this module to support non-roadwarrior type configs (eg site-to-site linking), to keep things simple, but I’m happy to merge any PRs that make it easier to connect more mobile devices or branch routers back to a main VPN host. I’m also happy to merge PRs for more GNU/Linux distribution support – currently only Debian/Ubuntu are supported, but it shouldn’t be hard to add others.

If you’re on Android, this VPN will work for you, but you may find the OpenVPN client better and more flexible, since the Android IKEv2 client doesn’t have the same level of on-demand functionality that iOS has built in. You may also find OpenVPN a better option if you regularly use restrictive networks that only allow “HTTPS” out, since it can run on TCP 443, whereas StrongSwan IKEv2 runs on UDP ports 500 and 4500.

Ubiquiti UniFi video lack of SSL/TLS validation

Posting this here since I filed a disclosure with Ubiquiti on Feb 28th 2016 and have had no acknowledgment other than being told to be patient. But two months of not even looking at what is quite a serious issue isn’t acceptable to me.

I do really like the Unifi Video product (hardware + software) so it’s a shame it’s let down by poor transport security and slow addressing of security issues by the vendor. I intend to write up a proper review soon, but it was more important to get this report out first.

My mitigation recommendation is that you only communicate with your Unifi Video systems via a secure encrypted VPN (eg IKEv2 or OpenVPN) until such time as Ubiquiti takes this seriously and patches their shit.


28th Feb 2016 – Disclosure of issue via HackerOne (#119121).

There is an SSL/TLS certificate validation flaw in the Unifi Video application for Android and iOS, where it accepts any self-signed certificate served by the Unifi Video server, silently allowing a malicious third party to intercept data.

Versions of software used:

  • Unifi Video 3.1.2 (server)
  • Android app 1.1.3 (Build 153)
  • iOS app 1.1.7 (Build 1.1.48)

Impact
Any man-in-the-middle attacker could intercept customers using Unifi Video from mobile devices by replacing the secure connection with their own self-signed certificate, capturing the login password and all video content, and then being able to view any cameras at their leisure in future.

Steps to reproduce:

  1. Perform clean installation of Unifi Video server.
  2. Connect to the web interface via browser. Self-signed cert, so have to accept cert.
  3. Connect to NVR via the Android app. No cert acceptance needed.
  4. Connect to NVR via the iOS app. No cert acceptance needed.
  5. Erase the previously generated keystore on the server with: echo -n "" > /usr/lib/unifi-video/data/keystore
  6. Restart server with: /etc/init.d/unifi-video restart
  7. We now have the server running with a new cert. You can validate this by refreshing the browser session – it will require re-acceptance of the new self-signed certificate, and you can see the new generation time & fingerprint (see the openssl example below this list).
  8. Launch the Android app. Reconnect to the previously connected NVR. No warning/validation/acceptance of the new self-signed cert is requested.
  9. Launch the iOS app. Reconnect to the previously connected NVR. No warning/validation/acceptance of the new self-signed cert is requested.
  10. Go get some gin and cry :-(
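For anyone wanting to check what cert their server is currently presenting (eg to compare against what the apps silently accepted), the fingerprint can be pulled with openssl – port 7443 is assumed here for the Unifi Video web UI, adjust for your install:

openssl s_client -connect NVR-ADDRESS:7443 < /dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha256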

Comments
Whilst I can understand that an engineer may have decided to make the mobile apps always accept a cert the first time they see it, to simplify setup for customers who will predominantly have a self-signed cert on their Unifi Video server, the apps must not accept subsequent certificate changes without warning the user. Failing to do so allows a MITM attack on any insecure network.

I’d recommend a revised workflow such as:

  1. User connects to a new NVR for the first time. Certificate is accepted silently (or better, shows the fingerprint, aka SSH style).
  2. Mobile app stores the cert fingerprint against the NVR it connected to.
  3. Cert gets changed – whether intentionally by user, or unintentionally by attacker.
  4. Mobile apps warn that the NVR’s cert fingerprint has changed and that this could be dangerous/malicious. User has option of selecting whether they trust this new certificate or whether they do not wish to connect. This is the approach that web browsers take with changed self-signed certificates.

This would prevent silent MITM attacks, whilst still allowing a cert to be updated/changed intentionally.


 

Communication with Ubiquiti:

12th March 2016 Jethro Carr

hi Ubiquiti,

Can I please get an update – do you confirm there is an issue and have a timeframe for resolution?

regards,
Jethro

15th March 2016 Ubiquiti Response

Thank you for submitting this issue to us, and we apologize for the delay. Since launching with HackerOne we have seen many issues submitted, and we are currently working on reducing our backlog. We appreciate your patience and we’ll be sure to update you as soon as we have more information.

Thanks and good luck in your future bug hunting.

24th April 2016 Jethro Carr

hi Ubiquiti,

I’ll be disclosing publicly on 29th of April due to no action on this report after two months.

regards,
Jethro

26th April 2016 Ubiquiti Response

Thank you for submitting this issue to us, and we apologize for the delay.

We’re still reviewing this issue and we appreciate your patience. We’ll be sure to update you as soon as we have more information.

Thanks and good luck in your future bug hunting.