
30 September 2021

Outrunning the lions…


I’m sure you’ve heard the fable of the lion and the camera crew, where the moral of the story is that to survive it’s not necessary to outrun the lion, just the other people. Which is all very profound and wise (and quotable).

Over the years, I have helped hundreds of organisations with their security, and a question I am asked repeatedly is, “how do we compare?” Now, that’s not actually an illogical or unreasonable question to ask, especially from a board member. They are, of course, used to having their trading results made public and compared to those of competitors. And in that regard, their survival as an organisation (and as a leader) does indeed depend on being ahead of their peers.

However, security isn’t like trading figures, and being ahead of their peers probably isn’t going to save them from suffering an incident.

Because the important thing to remember is that in the context of security, the lion isn’t restricted to eating one person at a time, and there are in fact an almost unlimited number of lions.

Cheer up. It’ll all be ok. Probably. ;)

21 January 2021

Google Cloud Service Account Authentication

Whilst there is a lot of documentation available on the Google Cloud sites, it is hopelessly disorganised, the examples contradict each other, and in general it is very confusing. So, after spending hours trying to understand what was required to authenticate a service account and get an oauth2 access token, I thought I’d write up my notes into a technology-agnostic recipe to save you some pain.

To be able to use the APIs at all, you’ll first need to go into the Google Cloud console, enable the APIs you wish to use [1], create a Service Account, assign it the permissions you need, and then create and download a key file for it [2].

If you open the key file, you’ll see that it is a standard JSON object with a collection of values, but the only ones you’ll need are client_email and private_key.

The request itself is a standard HTTPS request containing a JSON Web Token (JWT) [3], which needs to be constructed as follows:

 

JWT Header

{
    "typ": "JWT",
    "alg": "RS256"
}


JWT Payload

{
    "iss": "billy@domain.org",
    "aud": "https://oauth2.googleapis.com/token",
    "iat": 1611220000,
    "exp": 1611220030,
    "scope": "https://www.googleapis.com/auth/cloud-platform"
}


When constructing your own JWT payload, you’ll obviously need to use values that suit your needs. The iss value is the client_email from the key file; iat is the current time in Unix epoch seconds; exp is iat plus a suitable duration to allow you to use the JWT (30 seconds is more than enough); and scope is a space-delimited list of Google Cloud scopes [4].

Sign the JWT with the private_key from the key file, using RSA with SHA-256 and PKCS#1 v1.5 padding (i.e. the RS256 algorithm declared in the header).
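The recipe above is deliberately technology-agnostic, but as an illustration, here is a minimal Python sketch of the same construction and signing steps, assuming the PyJWT library is installed with its crypto extra (pip install pyjwt[crypto]); the key file path is just a placeholder:

import json
import time

import jwt  # PyJWT; the RS256 algorithm needs the 'cryptography' extra

# Load the service account key file downloaded from the console
with open("service-account.json") as f:  # placeholder path
    key = json.load(f)

now = int(time.time())
claims = {
    "iss": key["client_email"],
    "aud": "https://oauth2.googleapis.com/token",
    "iat": now,
    "exp": now + 30,
    "scope": "https://www.googleapis.com/auth/cloud-platform",
}

# jwt.encode signs with RS256 (RSA, SHA-256, PKCS#1 v1.5 padding) and
# returns the compact serialised JWT, ready to be sent as the assertion
signed_jwt = jwt.encode(claims, key["private_key"], algorithm="RS256")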

Finally, make an HTTPS POST request with the JWT in the assertion parameter of the body, and the oauth2 access_token should be returned in the response:

POST /token HTTP/1.1
Host: oauth2.googleapis.com
User-Agent: Mozilla/5.0
Accept-Encoding: br,gzip
Accept-Language: en-GB,en;q=0.5
Accept: */*
Connection: keep-alive
Content-Length: 822
Content-Type: application/x-www-form-urlencoded

grant_type=urn%3aietf%3aparams%3aoauth%3agrant-type%3ajwt-bearer&assertion=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJiaWxseUBkb21haW4ub3JnIiwiYXVkIjoiaHR0cHM6Ly9vYXV0aDIuZ29vZ2xlYXBpcy5jb20vdG9rZW4iLCJpYXQiOjE2MTEyMjIxMDIsImV4cCI6MTYxMTIyMjEzMiwic2NvcGUiOiJodHRwczovL3d3dy5nb29nbGVhcGlzLmNvbS9hdXRoL2Nsb3VkLXBsYXRmb3JtLnJlYWQtb25seSBodHRwczovL3d3dy5nb29nbGVhcGlzLmNvbS9hdXRoL2NvbXB1dGUucmVhZG9ubHkgaHR0cHM6Ly93d3cuZ29vZ2xlYXBpcy5jb20vYXV0aC9uZGV2LmNsb3VkZG5zLnJlYWRvbmx5In0.A-eWGCpR5qDvHyxSDtqAcUMrPRSYhVAlXmfJol0kFMAyMbqdDBFMMofevnhjDBLNqXu4YJchFLG5Yb3BgAW78bMX7VDZVeHvn0TBI4qb8-_rfe2YEWZCKegXHF_56q5_i3iGjVgEKVMwFWK6hTGToIjnb-u3ir0mPbS5y5BhufD-054YhQXLqHEIMpRIRg10SqKVor7CLDJCbkRCbfH7auSXIhRV8_ybHwsck1bE_BFbThZ5dSLpsi2Y28vYiJp_JzY2oyHTGc2P98JKhR-CXvJba_o1aapm8XS77CH3V4Nlu01HY5THwl-UVx_c8KQUPf6eNEscyHec-mt_C6ypjw
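And the same token exchange expressed as a Python sketch (again just an illustration, using the requests library and the signed_jwt value from the earlier snippet; the access token comes back in the JSON body):

import requests

resp = requests.post(
    "https://oauth2.googleapis.com/token",
    data={
        # the grant_type below is the literal (URL-decoded) value shown above
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": signed_jwt,
    },
)
resp.raise_for_status()
access_token = resp.json()["access_token"]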


  1. https://cloud.google.com/apis/docs/getting-started#enabling_apis
  2. https://cloud.google.com/iam/docs/creating-managing-service-accounts
  3. https://jwt.io/introduction
  4. https://developers.google.com/identity/protocols/oauth2/scopes

6 November 2019

MySQL SSL configuration


The default SSL settings used by the MySQL Connector/J JDBC driver when initiating a connection to a MySQL server changed in the recent past, and (from a quick look through the most popular questions and answers on Stack Overflow) the new values are causing a lot of confusion. What is worse is that the standard advice seems to be to disable SSL altogether, which is a bit of a disaster in the making.

The typical error message encountered will be something along the lines of:

WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.

Now, if your connection is genuinely not exposed to the network (localhost only) or you are working in a non-production environment with no real data, then sure: there's no harm in disabling SSL by including the option useSSL=false.

For everyone else, the following options are required to get SSL working with certificate and host verification enabled and the weak protocol versions disabled:

  • useSSL=true
  • sslMode=VERIFY_IDENTITY
  • trustCertificateKeyStoreUrl=file:path_to_keystore
  • trustCertificateKeyStorePassword=password
  • enabledTLSProtocols=TLSv1.2

As a working example, suppose you wish to connect one of the Atlassian products (Jira, Confluence, Bamboo etc.) to a MySQL server using SSL. You’ll need to follow these broad steps.

  • First, make sure you have a valid certificate generated for the MySQL server host, and that the CA certificate is installed onto the client host (for the popular public CAs it’ll already be there, but if you are using a self-signed CA you’ll likely need to install it manually; see the keytool sketch after this list).
  • Next, make sure that the Java keystore contains all the CA certificates. On Debian/Ubuntu this is achieved by running:
    update-ca-certificates -f
    chmod 644 /etc/ssl/certs/java/cacerts

  • Then finally, update the connection string used by the Atlassian product to include all the required options, which on Debian/Ubuntu would be something like:
jdbc:mysql://mysql_server/confluence?useSSL=true&sslMode=VERIFY_IDENTITY&trustCertificateKeyStoreUrl=file%3A%2Fetc%2Fssl%2Fcerts%2Fjava%2Fcacerts&trustCertificateKeyStorePassword=changeit&enabledTLSProtocols=TLSv1.2&useUnicode=true&characterEncoding=utf8
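As an aside, if you are using a self-signed CA and the update-ca-certificates route doesn’t pick it up, one way to add it to the Java keystore manually is with keytool. This is just a sketch: the certificate path is a placeholder, and changeit is the default cacerts password:

keytool -importcert -noprompt -alias mysql-ca -file /tmp/mysql-ca.pem -keystore /etc/ssl/certs/java/cacerts -storepass changeit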


23 October 2017

Android Emulator Recipe


When security testing an Android application, sometimes you'll want to do so from a PC that doesn't have the full development environment already installed.

Nota bene: if you wish to run the Android emulator within another virtual machine (such as VMware), you must first enable the VM’s option to virtualise VT-x/EPT (nested virtualisation), and add an extra 2GB of memory and two more cores to cover the emulator’s requirements.

Install the Android command line tools [1] to C:\Users\%USERNAME%\AppData\Local\Android\Sdk\tools.

Create empty directories so the sdkmanager command line tools work:
mkdir C:\Users\%USERNAME%\AppData\Local\Android\Sdk\platforms
mkdir C:\Users\%USERNAME%\AppData\Local\Android\Sdk\platform-tools


Install the Intel HAXM engine [2].

Update the SDK tools:
mkdir C:\Users\%USERNAME%\AppData\Local\Android\Sdk\tools.temp
xcopy /e C:\Users\%USERNAME%\AppData\Local\Android\Sdk\tools C:\Users\%USERNAME%\AppData\Local\Android\Sdk\tools.temp
C:\Users\%USERNAME%\AppData\Local\Android\Sdk\tools.temp\bin\sdkmanager --update
rd /s /q C:\Users\%USERNAME%\AppData\Local\Android\Sdk\tools.temp


Download the emulator system image (adjust version to suit needs):
C:\Users\%USERNAME%\AppData\Local\Android\Sdk\tools\bin\sdkmanager "system-images;android-23;default;x86"

Create the emulator virtual device (AVD):
C:\Users\%USERNAME%\AppData\Local\Android\Sdk\tools\bin\avdmanager --verbose create avd --force --name test --device "7in WSVGA (Tablet)" --package "system-images;android-23;default;x86" -b x86 -c 64M

Edit the AVD configuration file and enable the system keyboard:
notepad C:\Users\%USERNAME%\.android\avd\test.avd\config.ini
hw.keyboard = yes


Start the emulator (in this case routing all outbound web traffic through a proxy):
C:\Users\%USERNAME%\AppData\Local\Android\Sdk\emulator\emulator -avd test -verbose -skin 768x1280 -memory 2048 -netdelay none -netspeed full -gpu angle -no-audio -no-boot-anim -dns-server 8.8.8.8 -http-proxy http://proxy:8080

Install a MITM certificate into the emulator:
C:\Users\%USERNAME%\AppData\Local\Android\Sdk\platform-tools\adb -e root
C:\Users\%USERNAME%\AppData\Local\Android\Sdk\platform-tools\adb -e shell "mkdir -p /sdcard/Certificates/"
C:\Users\%USERNAME%\AppData\Local\Android\Sdk\platform-tools\adb -e push C:\Users\%USERNAME%\Desktop\cacert.cer /sdcard/Certificates/cacert.cer


Import the certificate by selecting settings, security, credentials, then install from SD card.

Install an application package into the emulator:
C:\Users\%USERNAME%\AppData\Local\Android\Sdk\platform-tools\adb -e install -r C:\Users\%USERNAME%\Desktop\application.apk

View the system log:
C:\Users\%USERNAME%\AppData\Local\Android\Sdk\platform-tools\adb logcat

References

  1. https://developer.android.com/studio/index.html
  2. https://software.intel.com/en-us/articles/intel-hardware-accelerated-execution-manager-intel-haxm


22 August 2016

SSL is mostly just a false sense of security

Unless you live in a cave somewhere, you’ll be aware that SSL (and by extension TLS) has had a rough life. For a start, there has been a long series of critical flaws in the various incarnations of the protocol, as well as in the common implementations, such as OpenSSL.

However, it doesn’t stop there. Even on the rare occasions when we are passing through the eye of the storm, and there are no known protocol or implementation flaws, SSL still consistently fails to offer any kind of real-world security because of the way it has been configured. Complex and fiddly as it is, there are dozens of bad choices that can be made when selecting SSL features at development or configuration time. And what’s worse, many common products ship with insecure features enabled by default, or a quick look through their support forums will turn up recommended settings that actively disable key parts of the SSL security model.

For example, one of those key components is the way that SSL uses certificates to authenticate one or both ends of a connection. Now, assuming the trust model with the certificate authority isn’t compromised in some way (which happens far too frequently), each end of the connection will use the certificate to ensure that it is communicating with whom it intends. That is, of course, unless the developer or systems administrator switched this functionality off because it was fiddly to make work reliably. Oops.

The truth of the matter is that with certificate validation switched off, SSL is pretty much worthless, as anyone who can eavesdrop on your connection can now also use a man-in-the-middle approach to gain access to the data inside.

How much of a problem are misconfigurations really? A couple of stats for you:

  • as of today, the Trustworthy Internet Movement shows that of the top 1m web sites, 57% have inadequate SSL security [1], most of which is down to misconfiguration; 
  • a Google search for “disable ssl certificate check” shows 500k hits from sysadmins and developers busily looking for ways to switch the functionality off [2].

The truth of the matter is that the biggest threat to your SSL isn’t a bunch of faceless hackers, but is instead sitting right now in your IT department. And what’s worse is that you probably think they’re doing a great job.

References


  1. https://www.trustworthyinternet.org/ssl-pulse/
  2. https://www.google.co.uk/search?q=disable+ssl+certificate+check

16 August 2016

Call me, maybe?

There’s one thing to be said for the world of Information Security, and it’s that it rarely stands still for a moment. New products and technologies are released with relentless regularity, each with its own particular set of security challenges to first understand, then protect. Never a quiet moment.

But as new technologies are introduced, old ones are often superseded and relegated to the “legacy” bucket. Just because they are no longer the latest hot topic doesn’t mean they don’t still pose a significant risk to the organisation.

One such technology is the traditional telephone, or as it likes to be formally addressed at black-tie events, the Public Switched Telephone Network (PSTN). Back in the day, the media was awash with stories of hacking attacks that were launched over the telephone network. In fact, the high-profile hack that led to the drafting of the UK Computer Misuse Act (CMA) was itself delivered over the telephone, using a modem.

The Internet has changed all of this though, as in most cases it rightly takes the majority of the focus when it comes to security. But in this shift, a lot of organisations seem to have forgotten about the PSTN. Which can be a bit of a problem, as unfortunately the attackers haven’t!

The fact is that the legacy telephone system remains a rich target for an attacker. Dozens of critical devices are still installed with a remote administrative interface connected to an old-school telephone line: burglar alarms, door entry systems, the PBX itself, video conferencing, SANs, and heavy machinery such as lifts. Any one of these could be available, and often all that is required is for an attacker to connect to the right telephone number and enter the default credentials for the device.

There was a time when most organisations would include their external telephone connectivity as part of their security testing programme (a “war dialling” exercise), but this seems to be a rarity these days.

Maybe it’s time for you to get a bit more old-school.

15 August 2016

Isolate the stupid



A while back, I wandered straight into the middle of a conversation between colleagues and overheard one of them say the wonderful phrase “isolate the stupid”. To be fair, I have taken it completely out of context of the original conversation, but I liked the phrase so much I thought I would use it for my own nefarious ends. Muhaha.

Over the years, I have regularly been called upon to provide help to organisations that have suffered a breach, and need to quickly find out what happened so that they can retrospectively bolt the door (so no more horses can escape).

A common contributing factor I often see in this kind of situation is a huge, flat internal network structure. One that mixes all classes of device together on the same logical wire: servers, desktops, peripherals and (horror-of-horrors) bring-your-own devices. In this situation, all it takes is one stupid mistake, such as a user clicking on a misleading phishing email, and the attacker suddenly has unrestricted access to the whole internal network. Game over.

In security parlance, compartmentalisation is the concept of breaking environments into discrete, logical components, so that a failure in one is contained rather than spreading to the rest. In almost all these situations a modicum of compartmentalisation would have either prevented the breach, or greatly reduced its impact.

So there you have your top tip for the day: isolate the stupid.

12 August 2016

Begin at the beginning

It is a rare week that passes without someone asking me, “What is the best way to get started as a security consultant?”

However, before I give you my answer, I feel I should first point out that everything I’m about to write is obviously just my personal opinion, which you are of course entitled to take with the appropriate pinch of salt. I would expect that if you asked the same question of someone else who was recruiting, they might be looking for very different things. Let the buyer beware!

Onward to my own particular answer though. For the last twenty years, I have personally read thousands of CVs and interviewed hundreds of people looking to make a start in the security industry. An industry which is unusually demanding of its consultants: requiring both extreme breadth and depth of knowledge. Knowledge that is built up in layers, one upon another, with each new layer intimately dependent on the previous one.

Many of the people I interview have incredibly polished and impressive CVs, complete with long lists of skills, credentials and training courses. Alas, once the interview starts it is common to find that there is no substance behind the polish. The skills lists are just an aspiration; no real knowledge underpins the claims.

For someone starting out, I would say the most important thing to do is to make sure you understand the basics really well, and if you don’t know it really well, leave it off your CV. There is no point learning about XSS if you don’t understand HTML. No point in learning HTML if you don’t know HTTP. No point in learning HTTP if you don’t know IP. No point in learning IP if you don’t understand basic maths and technology concepts like modulus, endian-ness, and non-decimal radix.

Don’t attempt to run before you have mastered walking. Begin at the beginning…

4 August 2016

Cisco ASN.1 vulnerability (cisco-sa-20160721-asn1c)

Just in case you missed it, Cisco has recently announced a critical flaw in code used across a range of their products [1]. However, whilst there will undoubtedly be a lot of discussion around the specifics of the vulnerability in the coming weeks (especially if someone turns it into a working exploit), I think it raises a few broader questions that are worth exploring too.

Firstly, there is the use of language in the advisory itself. The headline says it all really: Cisco is clearly placing the blame at the feet of Objective Systems, the supplier of the third-party code (who have released their own advisory [2]). This is a bit like a car manufacturer blaming the gearbox subcontractor if there’s a recall. Now, it is true that it might be due to a flaw in the third-party code, but as with the car analogy, the ultimate responsibility for ensuring that a product is fit for purpose lies with the manufacturer. Even a half-arsed root cause analysis of how the flaw made it through to a finished product (then remained undetected through four major product releases) should quickly show that it is ultimately nothing to do with the third-party code at all, but instead lies with failures in quality assurance.

Secondly, although the word compiler is used liberally in the Cisco advisory, the original advisory released by Lucas Molas [3] (the researcher who found the flaw) is very clear that the root cause is in a runtime library. So this is just another third-party code issue, rather than the esoteric compiler bug that is being hinted at.

Thirdly, this is a common library that is used in a number of products, so don’t expect the fallout to be limited to Cisco. The CERT advisory [4] already lists a number of vendors who are likely to be affected, so it would be wise to expect this one to snowball in the coming weeks. You can almost feel the hushed silence as the ambulance-chasers dust off their reverse-engineering toolkits and get ready to go to work.

Fourthly, speaking from the perspective of someone who has written an ASN.1 interpreter from scratch, it is complex and fiddly, which means easy to get wrong. So it comes as no surprise that this isn’t the first time a critical issue has been found in third-party ASN.1 code that is common across a range of platforms. Does anyone remember the huge batch of ASN.1 issues about 15 years back that were discovered by the University of Oulu Secure Programming Group [5]? Déjà vu baby!

Finally, and most importantly, the description of the vulnerability provided by the researcher shows that the root cause lies in parameters that are used without first being sanity checked. In fact, it’s a straightforward boundary condition that overflows a 32-bit integer, which really is the lowest of the low-hanging fruit when it comes to unit testing. So I would have to wag my chubby finger at both Cisco and Objective Systems and question their approach to QA in their development cycle. This isn’t rocket science, people!

References


  1. http://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20160721-asn1c
  2. http://www.obj-sys.com/blog/?p=949
  3. https://github.com/programa-stic/security-advisories/tree/master/ObjSys/CVE-2016-5080
  4. http://www.kb.cert.org/vuls/id/790839 
  5. https://www.sans.org/reading-room/whitepapers/protocols/snmp-potential-asn1-vulnerabilities-912