INCA Blog

Possibly the biggest misnomer of recent times is the term ‘zero-trust’, in relation to identity management and authentication of users wanting to access an organisation’s protected resources (computer applications, databases, sensitive documentation etc.). Vendors and industry commentators seem to see the term as referring to a brave new world in which current IAM and access control technology is dated and inadequate, yet they are rarely able to describe what zero-trust really is or how it is applied. Zero-trust is not a technology and it’s not a solution; you can’t go to your favourite vendor and buy a bit of zero-trust. It’s a corporate strategy, a reference architecture, a foundational belief. You construct a zero-trust environment by adhering to a set of practices that will, over time, significantly reduce the vulnerability of your organisation’s business operations.

The first step is to ensure a holistic approach to authentication and authorisation services. There’s no point in establishing a strong authentication service for webserver applications while network segmentation is left relying on high-level group memberships.

Secondly, and yes, this is why it’s a misnomer: use a ‘trust-but-verify’ approach. When a particular data store is used as a source for authentication decisions, use another mechanism to verify it. This will typically use the person’s smartphone (push authentication for low assurance, facial recognition or fingerprint for higher assurance).
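As a rough illustration, the ‘trust-but-verify’ pattern amounts to a step-up check: validate the primary credential against the authentication store, then confirm it through an independent second channel. The function and parameter names below are invented for the sketch:

```python
# Hypothetical sketch of 'trust-but-verify' step-up authentication:
# a primary credential check followed by an independent second-channel
# verification chosen by the required assurance level.

ASSURANCE_METHODS = {
    "low": "push_notification",   # approve on the user's smartphone
    "high": "biometric",          # face or fingerprint on the device
}

def authenticate(user, password, required_assurance,
                 check_password, second_factor):
    """check_password and second_factor are callables supplied by the
    deployment (both names are illustrative, not a real API)."""
    if not check_password(user, password):
        return False
    # Never rely on the primary store alone: verify independently.
    return second_factor(user, ASSURANCE_METHODS[required_assurance])
```

The key design point is that the second factor travels over a different channel than the primary credential, so compromising the authentication store alone is not enough.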

‘Zero-trust’ needs a corporate culture that values security and it requires a least-privileges approach to access control. Nothing new.

Edge Computing is one of those terms that mean different things to different people.

Its genesis is in the Operational Technology (OT) world. Typically, OT networks were isolated from the rest of the world because 1) they needed protecting and 2) they carried a great deal of mission-critical, low-latency traffic. To manage the exfiltration of data it became fashionable to establish a computing device at the edge that would ensure only aggregated data left the network and only access to supervisory processes was supported.

A typical Industry 4.0 environment will have a myriad of systems on the network all controlling manufacturing processes with supervisory systems to allow staff to monitor the production environment and receive notification of events as they happen. An Edge Computer allows supervision to occur and the amount of data being communicated back to head office to be controlled. Management don’t need to know how many work-processes have been completed, they just want to know how many finished products have gone into inventory.
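The filtering role of the Edge Computer can be sketched in a few lines; the event names and the ‘finished product’ convention here are invented for illustration:

```python
# Illustrative only: an edge node that consumes raw shop-floor events
# and forwards nothing but an aggregate to head office.

from collections import Counter

def aggregate_for_head_office(events):
    """events: iterable of (event_type, quantity) tuples from the
    production network. Intermediate work-process events are summarised
    away; only finished-goods totals leave the edge."""
    totals = Counter()
    for event_type, qty in events:
        totals[event_type] += qty
    # Head office only sees what went into inventory.
    return {"finished_to_inventory": totals.get("finished_product", 0)}
```

Everything else, the per-machine telemetry and the work-in-progress counts, stays on the OT network where it belongs.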

Vehicles are another case in point. There is an enormous amount of processing occurring in a car, from the critical control messaging advising various components of their status, to trip monitoring and recording. Real-time external communications is becoming increasingly important as road assets become more intelligent and provide better traffic management to appropriately equipped vehicles. An Edge Computing device can make sure that just the service record is made available to the workshop.

Then there’s the home environment. With increasingly sophisticated devices capable of being controlled remotely, everything from air conditioners and lights to charging stations and grid feed-in controllers, the home network is becoming something that must be protected and controlled; and an Edge Computing device, typically the Wi-Fi router, can help.

There are two main reasons for using an Edge computer:

  1.  Cost – being able to limit the amount of data leaving the network means that the bandwidth requirement is lower and the associated cost is reduced.
  2.  Security – isolating production equipment from administrative interference by providing access to just the data that is needed by management, means operational systems can be protected.


The provision of edge computing devices is a technology whose time has come.

Australia is finally putting some ‘runs on the board’ in the area of identity information. This is good news because collecting, storing and using identity information is becoming more important for a variety of reasons:
  • There is a heightened concern about the privacy of personal information as we read about another breach or unsolicited sale of identity information.
  • We are becoming increasingly uncomfortable with what’s been called ‘object based media’, which tailors our on-line experience and modifies what we read depending on our interests, gleaned from our searches and ‘likes’.
  • Organisations providing consumer or citizen services seek to improve their users’ experience by eliminating username and password logins, but at the same time try to gather as much information as possible about their clients.
So how has our government stepped up to the task of acquiring, managing and protecting our identity? The federal government’s approach is built around the MyGov initiative. It’s taken many years to get to this point.

The two main agencies working in this space are the Department of Human Services (DHS) and the Australian Taxation Office (ATO), each following its own path for its own purposes. The Digital Transformation Office, expanded and renamed the Digital Transformation Agency (DTA), was charged with managing the differences between the two departments and setting policy as to how a common identity management environment was to be deployed. They considered following the UK Verify model but that was eventually dropped; political pressure made it impossible for an independently developed identity management solution to be deployed. Common wisdom and multiple consultants’ recommendations suggested a federated environment that would incorporate state-based identity providers such as Queensland’s CIDM service and ServiceNSW, so that citizens with a registered identity with their state government would be able to use that identity to log onto federal systems.

But DHS won. The MyGov system is a closed authentication environment that requires service providers to register their application on the MyGov platform. While the platform has been opened-up to support other identity provider services, AustPost being the first, there is little incentive to sign up to any other service if you already have a MyGov account. Queensland is already decommitting from CIDM and focussing on the driver licence and 15+ card registrations that will be incorporated into an identity broker framework.

What’s more, the DTA is now working with the National Exchange of Vehicle and Driver Information (NEVDIS), originally established to facilitate cross-state sharing of driver licence and vehicle registration information with law enforcement officials, to get access to driver licence information. They particularly want access to the photo so that they can reach assurance level 3, something the ATO wants for citizen access to more sensitive services. Driver licence photos are getting better – in Queensland they meet ICAO standards – which facilitates three-factor smartphone authentication as smartphone manufacturers open up their facial recognition technology.

But sharing of photos is a concern for Australians. We provide photos for the purpose of getting a driver licence, not for a central database to be used for other purposes to which we have not consented. This contravenes Australian privacy legislation.

Facial recognition simply needs a facial (visage) template that measures information such as the distance between the eyes, the width and length of the nose, the mouth position and the chin shape. This enables facial recognition but not image reconstruction. It’s also a lot less data to transmit and store. It is hoped that this is all that gets contributed to the DTA. It could be argued that since a template is a derivation of the photo it’s not captured under privacy legislation.
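As a toy illustration of why a template is both compact and non-reversible, a match can reduce to comparing a short vector of measurements against the enrolled template; the features, values and threshold below are invented:

```python
# Toy one-to-one facial match: compare a small measurement vector
# against a stored template. Real systems use far richer features;
# the point is that only numbers, not the image, need be stored.

import math

def template_distance(a, b):
    """Euclidean distance between two facial templates, each a tuple of
    measurements (e.g. eye distance, nose width, nose length, ...)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def matches(candidate, stored, threshold=2.0):
    # The threshold is illustrative; real systems tune it for the
    # false-accept vs false-reject trade-off.
    return template_distance(candidate, stored) <= threshold

stored = (64.0, 31.5, 48.2, 12.0)   # enrolled template (invented values)
live = (64.5, 31.0, 48.0, 12.3)     # measurements from the live camera
```

A handful of floats like this cannot be turned back into a face, which is the nub of the privacy argument.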

So – it’s good that the federal government is finally moving ahead with an on-line authentication service. It’s just too bad that it’s not a truly federated system and that it requires service providers to be exposed via the MyGov environment; and it’s hoped that the driver licence application process will soon close the ‘consent’ objection to sharing visage templates.

Oh – on the topic of privacy, Australia has some of the best privacy legislation. It’s a shame the Office of the Australian Information Commissioner (OAIC) has not been funded to investigate and prosecute the many organisations flagrantly abusing consumer privacy every day. We are continually asked for more information than is necessary for the services we’re requesting, and organisations are not deleting information when they can’t be bothered to update it. And most Australian organisations are not capable of responding to a person’s request for access to their data, or to requests to correct errors (required under the legislation).

Many company privacy policy statements, a requirement under the legislation, are very poor and the number of breaches, with notification finally a legislated requirement, indicates that companies are not safeguarding the data they keep on us.

It’s also a shame that the Attorney-General’s Department has not moved ahead with the Cross-Border Privacy Rules (CBPR). We need to plug-and-play in Asia, yet we spend more time on Europe’s General Data Protection Regulation (GDPR). GDPR may be the gold standard when it comes to privacy practices, but Asia consists of sovereign states that each set their own privacy regulation, nothing like Europe’s member states, which adhere to a common regulation. Again, the OAIC’s role in CBPR needs funding.

So it’s a mixed report card for Australia; we’ve done some things right, and we’re finally going to have an authentication system to access federal government services. It’s too bad that I must set up a MyGov account to do so and can’t use my QGov account.

But that’s the reality we live with - political factions seem to trump logical decisions. 

 

One of the latest topics to be selected for media-mania is facial recognition. Can we of sound mind and technical education please provide a balance to the self-serving journalists who seek to promote their names through social media hype?

There are three areas of confusion that have surfaced over the past six months:

  • Privacy issues surrounding facial images

There are no privacy issues surrounding facial recognition. There are, of course, concerns regarding the storage and sharing of facial images that persons allowing themselves to be photographed as part of a registration process should question. But facial recognition uses facial templates (sometimes called facial signatures) and does not require transmission or storage of facial images.

  • Concerns regarding CCTV cameras

This item supposes that local councils are mapping our movements when we are caught on cameras in public spaces. The technology to do this at scale is not currently available: it requires one-to-many matching and ICAO-grade images.

  •  Comparisons with the Chinese social credit program

Whatever you think of Beijing’s initiative to promote social harmony it has nothing to do with facial recognition – that just happens to be one of the technologies they purport to use. The only issue is whether or not democratic countries want to go down that route.

It’s important that technically competent people help to quell fear-mongering and ensure a level-headed approach as new technology becomes mainstream.

In helping people understand the technology it is important to differentiate between the two main types of facial recognition; they are vastly different:

1. One-to-one

This is the area in which most change is occurring and where we are benefitting the most from a better user-experience. There are multiple use-cases, for instance:

-  SmartGate immigration stations. These are the automated devices used at border crossings that allow you, if you’re lucky, to enter a country without talking to a border-control officer. They work best in Europe where passports from a wide number of countries are accommodated. There are two steps to the process: you present your passport allowing the system to retrieve your facial template, and then a camera verifies that it is actually you travelling.

-  Windows Hello. After registering your face with your PC, and creating your facial template, subsequent logins will turn on the infra-red camera to verify your facial image even in low light.

This type of facial recognition is the future of authentication. Most new smartphones have strong graphics-processing capabilities and are able to positively identify you to a high assurance level. Many governments and commercial organisations want a higher level of assurance than most PIN-based or push-authentication systems can provide, so this type of facial recognition has a bright future.

2. One-to-many

This is usually the type of facial recognition that garners the most interest and criticism from members of the public. It is widely used in criminal investigations where a visual image of an alleged perpetrator can be compared with police files of stored facial templates in order to identify a suspected criminal.

This type of facial recognition takes time and processing power; it is not suitable for authentication purposes. It has been trialled in multiple airports in an attempt to stop people on watch lists, or individuals with red-flag indicators, from leaving or entering a country. These trials have had very limited success because of high false-negative rates.
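The cost difference between the two modes is easy to see in outline: a one-to-many search must score the probe against every stored template and still apply a threshold, and anyone scoring just over it is missed (a false negative). A toy sketch, with an invented distance function and threshold:

```python
# Toy one-to-many search: score a probe template against a whole
# database and return candidates under a distance threshold.
# Linear in database size, unlike the single comparison of one-to-one.

import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def search(probe, database, threshold=2.0):
    """database: dict of person_id -> template. Returns candidate ids
    sorted by closeness; an empty list is a (possibly false) negative."""
    hits = [(distance(probe, tpl), pid) for pid, tpl in database.items()]
    return [pid for d, pid in sorted(hits) if d <= threshold]
```

Scale that linear scan up to millions of templates captured under uncontrolled lighting and angles, and the accuracy problems the trials hit become unsurprising.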

So what should the technical professionals be recommending to our clients?

  1. When we allow ourselves to be photographed as part of a registration process, e.g. obtaining a driver licence, we should ensure we are satisfied with the privacy statement of the organisation involved. In most western countries privacy legislation allows companies and government to collect only the information they need for the transaction that a user is undertaking. They can’t collect information that just might be needed in the future or would be useful for their demographic analysis program. An organisation cannot collect a facial template if they don’t need it for authentication, and they can’t ask for a photograph unless it’s needed for the requested business process, e.g. application for a driver licence. If a facial template or a facial image is collected it can only be used for the purpose for which it was collected. Government cannot use driver licence photos to authenticate citizens to government services unless explicit consent is collected.
  2. We need to identify the current limitations of the technology. Much has been written about the ability to “fool” facial recognition systems with a modified photograph. It seems that a suitably “doctored” image can be used to cause a false positive. We would be remiss, as with any authentication mechanism, if we did not assist our clients in identifying situations in which a technology does not provide the required level of security.
  3. Perhaps the most important advice we can give, however, concerns the potential for facial recognition to radically change user experience in the future. Users of Windows Hello won’t go back to passwords, PINs or fingerprints. Facial recognition is so simple, and exceeds most security requirements, that it is the future for authentication on PCs and laptops, and it will be the authentication tool of choice on smartphones too, with the FIDO Alliance supporting a facial recognition certification program.

No – passwords aren’t dead, but facial recognition is one more nail in the coffin.


Thx.
Graham

Most developed countries have enacted privacy legislation with the intent to protect their citizens from bad corporate practices that may either deliberately or inadvertently release their personally identifiable information (PII) to unauthorised persons. While this is obviously a well-intentioned activity it does have a commercial impact. Companies wanting to transact across sovereign borders must ensure they adhere to privacy legislation in the countries in which they do business, and individuals providing their PII to foreign companies need to be confident that their private data is being adequately protected in the foreign jurisdiction.

Europe has addressed these issues via the General Data Protection Regulation (GDPR) initiative which harmonises privacy legislation across European Union countries. The main driver for the GDPR is protection of individuals’ privacy. The legislation requires organisations to establish data controllers for repositories of PII and to seek consent for the use of PII within their business processes. GDPR also provides for recourse in the event of contravention of the regulation. Indeed the penalties can be quite severe with enforcement agencies in each country ready to investigate, and if necessary prosecute, those that violate the legislation.

In the Asia Pacific Region the approach has been quite different. It is unrealistic to expect a harmonisation of privacy regulation across countries in the region so the Asia-Pacific Economic Cooperation (APEC) established the Cross-border Privacy Rules (CBPR) system. Countries joining the CBPR must evaluate their privacy legislation against the 9 principles of the APEC Privacy Framework and then provide a mechanism for companies to be ‘certified’ by an Accountability Agent as being compliant with the CBPR.

While both initiatives seek to protect private data they are very different in their approach. GDPR relies on a legislative mandate that enjoins member countries in a prescriptive solution. It is based on homogenised legislation that ensures similar treatment of infractions regardless of where they occur in the European Common Market. By contrast, participation in the CBPR system is entirely voluntary; it is based on self-assessment with third-party verification. It relies on negotiated settlement of alleged contraventions and imposes no restriction on member countries regarding their local privacy laws. In order to participate a country must have enacted privacy legislation; this is a pre-requisite because member countries must map their local law to the APEC Privacy Framework as a step in their application to join the initiative. Some Asian countries are not in a position to consider CBPR because they lack the legislative framework to participate.

So – GDPR is predicated on tight coupling between member states that enables a strong legislative response to the task of data protection. CBPR accommodates a loose coupling of member countries imposing a framework that enhances cross-border trade and provides some recourse for individuals in the case of privacy regulation contravention by a foreign participant.

 

                             GDPR                                          CBPR
Program Characteristics      Tight-coupling of European member states      Loose-coupling of APEC member countries
Legislative Framework        Prescriptive, based on a single privacy law   Guidance, accommodating multiple privacy laws
Recourse for contravention   Punitive, with significant penalties          Negotiated, with local agreements for redress

Table 1 - Comparison of GDPR & CBPR

While GDPR and CBPR, by necessity, take different approaches, both serve to raise awareness of privacy issues and raise trust in the Internet as a vehicle for digital commerce.

Most governments are rushing to deploy on-line services. They have no choice: it’s too expensive to maintain other channels, and millennials would have it no other way.

This means that there is a need for an authentication service that will provide access to government services and most governments are addressing the issue by developing a central facility that becomes a ‘one-stop-shop’ providing access to services across multiple departments or ministries.

There are basically two frameworks being adopted: a persistent ID system that establishes an identity store, and a transitory ID approach in which no government ID store is required. A summary of the benefits of each approach is:

Persistent ID

This approach is by far the most widely deployed. In this instance a government agency establishes a central identity provider service to authenticate all users accessing government on-line services. Governments have a large amount of information that they necessarily store on their citizens. They issue driver licences, so they know where we live, our age, what we look like and our driving history. They track medical expenses, so they know how healthy we are and whether we have any chronic illness. Tax returns advise how much we earn and details such as our investments. But while governments hold a wealth of information on citizens it’s quite fractured, with each department or ministry maintaining its own records. There is typically little ‘sharing’ of information, which means that identity data cannot be leveraged to the degree it could be. One issue is privacy legislation, which restricts data-sharing without consent.

That means that, to develop an authentication mechanism for citizen access across multiple departments, government typically establishes a purpose-specific repository, to authenticate users before redirecting them to the requested service. The issue then is to associate an authenticated user to their record(s) within the department or ministry they are accessing. If a citizen is renewing their driver licence, either the authentication facility needs to pass through the driver licence number, if it’s available, or the target department will need to employ other attributes to establish the relationship. Another issue is harmonisation of common data. For instance, when a citizen moves house there is a need for a ‘change it once’ approach whereby an address change is propagated to the departments that maintain address detail. Another approach is to federate the identity data across departments, and levels of government, but this requires a level of co-operation within government that typically does not exist.
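The record-matching step described above can be sketched simply: if the central authenticator passes through a department identifier it is used directly; otherwise the department falls back to matching on common attributes. All field names here are invented for the example:

```python
# Illustrative record matching at a target department. If the central
# authenticator passes a department identifier (e.g. a licence number)
# it is used directly; otherwise common attributes are matched.

def find_citizen_record(assertion, records):
    """assertion: attributes released by the central identity service.
    records: the department's own citizen records (list of dicts)."""
    licence = assertion.get("licence_number")
    if licence is not None:
        return next((r for r in records
                     if r["licence_number"] == licence), None)
    # Fallback: match on attributes held by both parties.
    return next((r for r in records
                 if r["name"] == assertion.get("name")
                 and r["dob"] == assertion.get("dob")), None)
```

The fallback path is where harmonisation problems surface: if the department holds a stale name or address, the match fails and a manual linking process is needed.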

Transitory ID

The alternative to a persistent ID system is what we call a transitory ID framework, in which the government does not create a data store of citizen identity information; it relies on third parties who specialise in providing such services. The major benefit of a transitory ID facility is the elimination of the liability associated with maintaining a data repository of PII. In most jurisdictions there are severe penalties for unauthorised release of identity information, and this represents a significant risk that is avoided if government relies on third parties. It also allows citizens to select their service provider of choice for the storage and maintenance of their identity data.

But there are some drawbacks:
  • Since there is a reliance on third parties there is a need to establish rules; and a need for some form of conformance testing to ensure adherence to the rules.
  • There’s a cost component in that third-party identity providers typically want to be compensated, so some form of payment system is required, along with some subsidisation in the commencement phase until a sustainable level of transactions has been reached.
  • Since the third party will typically not have the identity attributes that allow departments or ministries to establish relationships, e.g. vehicle registration numbers, the target agencies need to match a user to their record(s) within the department so that the required service can be provided.
The most successful deployment of a transitory identity provider system is in the UK. There are several reasons for this:
  • They have a large enough population to support multiple third-party suppliers.
  • British citizens are fiercely protective of identity information and don’t want government to have any more of their identity data than they have to.
  •  The UK has a centralised form of government that makes it easier to enforce a common approach across government (there has already been a large ministry that tried to establish its own authentication mechanism, but it was encouraged not to).
Citizen identity management is an interesting area to watch. It will only grow in importance because on-line services continue to grow in importance and some innovative use of AI is expected that will make our experience with government more pleasurable. Won’t that be refreshing?

Over the past five years cloud services have grown to be ubiquitous, secure and high-performance. Yet just yesterday I was talking to a friend who was lamenting the decision he had to make at work regarding deploying a Microsoft Project server on AWS or Azure. He needed to provide access to team members from two organisations, and his company would not allow external people to access their on-premise project server. The cloud is the only way to go for such an application. But while that's so obvious, there are some caveats that need to be observed.

It's important that my friend selects a cloud service provider (CSP) appropriately. He needs to evaluate prospective suppliers from an operational-risk viewpoint (can you get your files back when you part ways with the CSP?), a technical viewpoint (does the CSP provide adequate security?) and a legal viewpoint (are the licence terms suitable?).

Then a decision needs to be made on the identity service to authenticate users to the site. Is an access control list going to be maintained on the CSP's site (bad), will there be a synchronisation to AD (not much better), or will the company establish an identity provider service in the Cloud? In this instance a cloud-based federation service to which the other company can interface would be a good idea.

The technology is here folks - let's just use it.

Thx.

Graham

 

Cloud - Strategic or Tactical?

Most companies do not plan their migration to the Cloud. Perhaps as a result of a question from upper management, they find out one day that they have multiple users of cloud services in their organisation. While each application was a good idea at the time, such a disparate approach means there is no strategic vision, service provision is uncoordinated, there is a significant training impost, and there is little governance over Cloud-based applications and infrastructure.

Identity Management

One aspect of IT infrastructure that must be addressed when embracing Cloud services is identity and access management. It is important that an efficient mechanism be adopted to manage access to applications. It is all too common to provide single sign-on for on-premise applications and only same sign-on for Cloud applications. But this is an aggravation for users and is usually accompanied by poor password management and less than real-time identity provisioning.

Companies should analyse their identity and access management requirements and ensure Cloud applications adhere to them. In an Azure environment this means selecting the level of integration. For a basic configuration the DirSync tool will synchronise on-premise identities from AD to Azure AD (AAD) and password hashes can be stored in AAD for same sign-on. While this can be satisfactory for support of legacy applications to the Cloud it does not provide the session management that single sign-on requires. Microsoft’s solution is to install ADFS on-premises to provide federation services and automatically authenticate users with an active session to Cloud applications. Third party federation services should also be considered since they can be less resource intensive and more flexible.

For naked-Cloud users the issue becomes more of a concern because of the proliferation of applications and the lack of Azure AD. This means that identities are synchronised into multiple environments causing security and privacy concerns. Another option is to select a Cloud-based identity provider service (IDP) and require all cloud-based applications to use it. This means that Cloud applications must adhere to the SAML protocol and that the IDP must provide the required attributes. In terms of selecting the IDP this will normally be dependent upon the main provider of Cloud applications. If Okta is selected then centralising on this service might be indicated. If Salesforce is used, embracing their identity service should be considered. Another solution would be to establish your own IDP on a service such as PingFederate and require all applications to interface to it.

A note on AWS identity services: a typical recommendation is to adopt a VPN approach for hybrid scenarios. This means the Cloud environment will simply be an extension of the on-premise environment, and an AD instance in the Cloud becomes part of the organisation’s AD forest. In this circumstance AWS can also offer the Microsoft Office applications in a standard format (not Office 365). This might be attractive for some organisations since it means that staff do not need to be trained in the use of Office 365. It also means that performance issues will need to be addressed due to the verbose nature of Windows applications. In some cases a virtual desktop infrastructure (VDI) approach will be warranted; in some cases a dedicated “pipe” into the closest AWS data centre will be a better solution.

C U in the Cloud

Graham Willamson

Organisations are somewhere along the continuum from fully manual identity management to fully leveraged identity and access management.

The manual organisations have no interface between their HR systems and downstream applications. System administration staff must enter user details into each system to which an employee requires access, and there are no reporting or governance capabilities. These companies are not only wasting time on data entry and creating errors that cause time wastage across the organisation, they are also creating security problems: de-provisioning (removing entries when staff members leave) generally does not occur, or does not occur in a timely fashion. Single sign-on is only a dream.

In organisations at the other end of the spectrum an employee’s details are entered once, usually into the HR system or, better still, the recruitment system, and then propagated to the SSO facility or account registration processes for relying applications. Staff have access to the applications they need on their first day on the job and are automatically removed on their last day. Managers get regular reports on the access granted to their subordinates, and management get governance reports on provisioning activity, authorisation activity and any denied authorisation events. It is to these organisations that this blog is addressed.

Arguably the next big thing is Dynamic Authorisation Management (DAM). If you’ve got a good identity and access management environment you should leverage this infrastructure for fine-grained access control on a real-time basis. With a well-designed DAM environment, a user accessing an application will have the request re-directed to a decision engine that will interrogate a policy store to determine if the user should get access and the level at which that access should be granted. The configuration follows the standard XACML model: a Policy Enforcement Point, a Policy Decision Point, a Policy Information Point and a policy store.

When the user attempts to access the protected resource, the enforcement point, typically Java code or a .NET library, sends a request to the Decision Point, which retrieves attributes from the Information Point, typically a directory, and runs through the policies that have been entered into the system to determine the user’s rights to access the resource in question.
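That request/decision flow can be sketched in a few lines of Python. This is a drastic simplification of a real XACML engine; the policy shape and attribute names are invented for the example:

```python
# Minimal sketch of the XACML-style flow: the enforcement point (PEP)
# asks the decision point (PDP), which pulls attributes from the
# information point (PIP) and evaluates policies. Not a real engine.

def pdp_decide(user, resource, action, pip, policies):
    """pip: callable returning the user's attributes (e.g. from a
    directory). policies: list of (condition, effect) pairs, evaluated
    in order; default deny."""
    attributes = pip(user)
    for condition, effect in policies:
        if condition(attributes, resource, action):
            return effect
    return "Deny"

# Example policy: finance staff may read the ledger (invented names).
policies = [
    (lambda attrs, res, act: "finance" in attrs.get("groups", ())
         and res == "ledger" and act == "read",
     "Permit"),
]
```

Because the PIP lookup happens at decision time, revoking a group membership in the directory takes effect on the very next request, which is the real-time property described above.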

The beauty of this is it happens in real-time so if a person has been removed from an access group they will immediately be refused access to the resource in question. The other big benefit is the application of a consistent set of policies, typically managed by the business units rather than system administrators.

Although there are many variants, there are basically two configuration models that can be used to provide this fine-grained control. The discrete authorisation device is a stand-alone, policy-driven decision engine that services any controlled application or device on the network. The other model is the gateway device, whereby an API gateway controlling communication between systems applies the access control policies. Both configurations use a policy store and a repository of identity attributes. Some products require the policy attributes for users to be stored locally; some will access the organisation’s identity store in real-time. Some products are designed for business-unit management of policies; others require a system administrator to manage policies.

Regardless of the solution selected, there is little doubt that dynamic authorisation management holds significant benefit for the advanced organisations that can leverage their identity management infrastructure to significantly tighten their access controls and data loss prevention environment.

Stay Safe - Graham