Know what you are looking for.

Metal detectors and baggage scanners (X-ray machines for luggage) are a common sight at public places in India – whether you go to a sports stadium, a mall, a hotel, a government office or, for that matter, even private offices and business centers. Airports are the one place where I see them globally.

However, I get a feeling of thoroughness only with the airport officials at these security checkpoints. At most places, I notice, the officers sitting at the screens of the baggage scanners are not even looking at the screens – which makes me feel uneasy (I will refer to this later as Experience 1). I don’t like being frisked multiple times a day, but when I pass through a security check and I know I have not been frisked properly – I know the danger I am walking into (I will refer to this later as Experience 2).

The routine differs from place to place. Mostly – my car’s boot gets checked, a mirror is pushed under the car to check if anything is stuck underneath (Exp 3) – and then I am asked to hand over my luggage and even my phone before passing through the metal detector, after which my luggage is handed back to me with a warm smile (Exp 4).

These officers seem to be trying to do a thorough check. And these ones (Exp 3 and Exp 4) interest me. The ones mentioned in Exp 1 and Exp 2 seemed totally hopeless – I have absolutely no respect for people who don’t do their work properly. As they say – you had one job.

But at this point I am thinking – do any of these four sets of folks know what their job is? So I talk to them – the response is the same everywhere – we are looking for items forbidden to carry. Okay – and what could those be? Well, the obvious ones – guns, bombs, knives, liquids, etc.

My next question to them is – do you know what a bomb looks like? Or a dismantled gun? They start to smile – because obviously, none of them has ever seen a real bomb. I don’t want to come across as depressed, or as a lunatic who is planning something, so I never ask more than one or two questions at a place and move on. Now, after so many such experiences, I ask just one question – do you know what you are looking for? The answer is mostly a smile or “kya madam” (which in my English translates best to “C’mon, Ma’am”).

At most airports I find the officers almost intimidating – their process lengthier and apparently more rigorous – a bit of a black box too. One doesn’t easily get to see the scanned images – the officers are glued to the screen – and to their credit, they sure find things every once in a while. But the investment is huge. The process again remains standard – not intelligent all the time. It makes me want to call it overkill sometimes, but the folks concerned say – when it is about safety, better safe than sorry. I am not sure they are even looking for an optimum solution.

As I kept probing my own uneasiness with the fake security checks I get at most places and, on the other side, the overwhelming and intimidating security checks at airports, I slowly began to feel that this is so similar to my world of software testing. Do testers know what they are looking for? More importantly, can they identify a risk if it is not shaped the way they expect it to be (mostly in their limited/fixed test cases)?

  • Most testers perform testing as a ritual they have to execute, in a certain order
  • More often than not, testing is in place only to tick a box and say – yes, we do it
  • When automation comes into play – most testers don’t know how to make the best use of it
  • Mindless automation – again as a “must do” procedure – is applied. No one is looking at the scanner screen.
  • Garbage-in, garbage-out automation keeps continuing. EVERY piece of luggage should pass through the scan. But the story ends there.
  • The regular beeping at the metal detector – or no beeping at all – doesn’t ring a bell with the executives. Because of so many false positives – no one bothers to check eventually. The need is to continuously upgrade the system – but it is so much work each time that they just let it go.

Comparisons could go on…

This is where I feel more hopeful about software testing than about the physical security check world – because we seem to be adopting “AI” to keep training our systems to separate defects from non-defects. As we begin to use more artificial intelligence in our automation, and train our verification scripts to update themselves as the application changes, our maintenance overhead should reduce to a minimum.

Don’t get me wrong – there will ALWAYS be a need for humans to explore the unknowns, but our effectiveness in quickly distributing the knowledge of a newly explored unknown into the whole system – making it a known quantity for our testers and application owners – will bring significant success to business owners in terms of reduced risk and reduced time to market with minimum investment.

If you wish to learn more about what you should be looking for in software, and how you can reduce risks in your application – and time to market, incrementally – without creating huge technical debt in automation, you could do these:

  1. Talk to me at
  2. Study testing, learn critical thinking and uncover risks at and — in fact, try registering for one of their classes.
  3. Explore test tools like Sauce Labs, Applitools, Tricentis.
  4. Explore training and webinars with and . They also have some of the best conferences and meetups – full of the latest trending content.

These are the absolute top ones that come to mind as I type. This wasn’t how I had planned to end the blog, hence a very limited list.

If you explore, you will find many more leaders and platforms. What is important is – to learn. So you know what you are looking for.

Cloud Computing and its Security Risks decoded for testers


Today, cloud computing – or rather just “Cloud” – has become a buzzword that everyone wants to speak, hear, use and do all they can with. At the other end, though, most of us are still grappling with the definition of cloud computing, its structure and characteristics, its deployment, and the new dimensions of challenge it brings in terms of security and performance.

Let’s figure out here what exactly cloud computing is, what structures currently prevail, and the challenges these pose around security and certain compliance requirements. I would also like to highlight the complex issues related to the adoption of cloud, and how they relate to the way it is deployed and its geographical location. Though it is an evolved technology, there are still issues and areas that need to be worked on and matured further in order to develop an ecosystem for cloud computing.

What is cloud computing? – Cloud computing is often used as a synonym for an infrastructure technology setup, and truly speaking, cloud has indeed brought a paradigm shift in the way infrastructure is set up in the IT industry. The term “cloud” is basically an abstraction of the underlying infrastructure [1]. To define it a little more elaborately: “Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” [2].

As with any new technology, cloud computing brings enormous new opportunities along with new risks. The infrastructure pool set up with cloud computing offers a cost-effective way to meet business requirements, but brings with it the risk of moving sensitive data and mission-critical applications to a completely new and still-emerging technology. How much real risk this poses, however, is determined by the risk tolerance of the business adopting the cloud.

To assess the security issues posed by a particular cloud implementation, one needs to study how the cloud is formed, in terms of the model adopted and the flexibility it offers. If, for any reason – the cloud vendor discontinuing the service, or other business reasons – the business needs to move to another provider, it is important to know beforehand how effortlessly this could be achieved and how the new vendor could provide a seamless disaster-recovery service to maintain business continuity.

NIST defines cloud computing by five essential characteristics, three cloud service models and four cloud deployment models.

Essential Characteristics of Cloud Computing

 Cloud services exhibit five essential characteristics that demonstrate their relation to, and differences from, traditional computing approaches:

  • On-demand self-service. A consumer can provision server time and network storage as needed, automatically, without requiring human interaction with the service provider.
  • Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs) as well as other traditional or cloud based software services.
  • Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a degree of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources, but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines. Even private clouds tend to pool resources between different parts of the same organization.
  • Rapid elasticity. Capabilities can be rapidly and elastically provisioned — in some cases automatically — to quickly scale out; and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
  • Measured service. Cloud systems automatically control and optimize resource usage by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, or active user accounts). Resource usage can be monitored, controlled, and reported — providing transparency for both the provider and consumer of the service.

Fig 1. : NIST’s Visual model of cloud computing definition

Cloud Service (Delivery) Models

Cloud service delivery is divided among three typical models and various derivative combinations. The three fundamental classifications are often referred to as the “SPI Model,” where ‘SPI’ refers to Software, Platform or Infrastructure (as a Service), respectively — defined thus:

  • Cloud Software as a Service (SaaS). The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user specific application configuration settings.
  • Cloud Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
  • Cloud Infrastructure as a Service (IaaS). The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

Cloud Deployment Models

Regardless of the service model utilized (SaaS, PaaS, or IaaS) there are four deployment models for cloud services, with derivative variations that address specific requirements:

  • Public Cloud. The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
  • Private Cloud. The cloud infrastructure is operated solely for a single organization. It may be managed by the organization or a third party, and may exist on-premises or off-premises.
  • Community Cloud. The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, or compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.
  • Hybrid Cloud. The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).

Fig 2: Deployment Models

Cloud Computing Security Risks

Understanding the relationships and dependencies between Cloud Computing models is critical to understanding Cloud Computing security risks.

Cloud Reference Model 

IaaS is the foundation of all cloud services, with PaaS building upon IaaS, and SaaS in turn building upon PaaS, as described in the Cloud Reference Model diagram. Just as capabilities are inherited, so are information security issues and risks. It is important to note that commercial cloud providers may not fit neatly into the layered service models. Nevertheless, the reference model is important for relating real-world services to an architectural framework and understanding which resources and services require security analysis.

 Fig 3 : Cloud Reference Model (Cloud Security Alliance)

IaaS includes the entire infrastructure resource stack, from the facilities to the hardware platforms that reside in them. It incorporates the capability to abstract resources (or not), as well as to deliver physical and logical connectivity to those resources. Ultimately, IaaS provides a set of APIs which allow consumers to manage and otherwise interact with the infrastructure.

PaaS sits atop IaaS and adds an additional layer of integration with application development frameworks, middleware capabilities, and functions such as database, messaging, and queuing, which allow developers to build applications upon the platform using the programming languages and tools supported by the stack.

SaaS, in turn, is built upon the underlying IaaS and PaaS stacks and provides a self-contained operating environment used to deliver the entire user experience, including the content, its presentation, the application(s), and management capabilities.

The key takeaway of the model is that the lower down the stack the cloud service provider stops, the more security capabilities and management the consumer is responsible for implementing and managing.
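The shared-responsibility consequence of this layering can be sketched as a simple lookup. The layer names and the exact points at which the provider "stops" below are assumptions chosen for illustration, not a normative mapping:

```python
# Sketch: which layers the consumer manages under each service model.
# The stack layers and cut-off points are illustrative assumptions,
# not an official NIST/CSA mapping.
STACK = ["facilities", "hardware", "virtualization", "os",
         "middleware", "runtime", "application", "data"]

# Index into STACK where the provider's responsibility stops.
PROVIDER_STOPS_AT = {"IaaS": 3, "PaaS": 6, "SaaS": 8}

def consumer_managed(model: str) -> list:
    """Everything above where the provider stops is the consumer's to secure."""
    return STACK[PROVIDER_STOPS_AT[model]:]

for m in ("IaaS", "PaaS", "SaaS"):
    print(m, consumer_managed(m))
```

The lower the provider stops (IaaS), the longer the consumer's list; with SaaS it shrinks to almost nothing beyond configuration.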

Mapping the Cloud model to Security Control & Compliance Model

Once the requirements of any regulatory or other compliance mandates are mapped against the cloud model, it becomes much easier to determine what needs to be done in order to feed back into a risk assessment framework; this, in turn, helps determine how the gaps, and ultimately the risk, should be addressed: accepted, transferred, or mitigated.

 Fig 4: Mapping the Cloud model to Security Control & Compliance Model


One needs to classify a cloud service against the cloud architecture model. Then it is possible to map its security architecture, as well as business, regulatory, and other compliance requirements, against it as a gap-analysis exercise. The result determines the general security and risk posture of the service and how it relates to an asset’s assurance and protection requirements.
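At its simplest, this gap-analysis exercise can be treated as a set difference between the controls a mandate requires and the controls the provider implements. A minimal sketch, with control names invented for illustration:

```python
# Sketch: gap analysis as a set difference. The control names are
# made up for illustration; a real exercise would use a control
# catalogue from the applicable compliance mandate.
required = {"encryption-at-rest", "audit-logging", "mfa", "key-rotation"}
provided = {"encryption-at-rest", "audit-logging"}

gaps = required - provided
for control in sorted(gaps):
    # Each gap must then be accepted, transferred, or mitigated.
    print("GAP:", control)
```

The resulting gap set is the input to the risk decision: accept, transfer, or mitigate each missing control.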

 Other Factors determining security risks:

Beyond the cloud architecture, the following twelve domains comprise the security “pain points” within a cloud environment and contribute towards its risk analysis.

The domains are divided into two broad categories: governance and operations. The governance domains are broad and address strategic and policy issues within a cloud computing environment, while the operational domains focus on more tactical security concerns and implementation within the architecture.

Governance Domains:

  1. Governance and Enterprise Risk Management
  2. Legal and Electronic Discovery
  3. Compliance and Audit
  4. Information Lifecycle Management
  5. Portability and Interoperability

Operational Domains:

  1. Traditional Security, Business Continuity and Disaster Recovery
  2. Data Center Operations
  3. Incident Response, Notification and Remediation
  4. Application Security
  5. Encryption and Key Management
  6. Identity and Access Management
  7. Virtualization

Security Testing strategy in Cloud Computing Environments

Security requirements are taken as a foundation from which to derive tests to be executed against a system under test. Yet these positive requirements by far do not cover all the relevant security aspects. Hence, especially for security testing, it is vital to incorporate negative requirements derived from risk analysis. Due to a cloud’s openness, in theory there exists an infinite number of tests, so a concise technique for incorporating the results of risk analysis into security testing is indispensable. A new model-driven methodology is therefore proposed for the security testing of cloud environments, ingesting misuse cases defined by negative requirements derived from risk analysis.

This is a novel model-driven approach to risk-based security testing of cloud systems, aimed at improving cloud environment security. Based on the risk model resulting from a risk analysis (most of which is discussed above) performed on the Cloud Under Test (CUT), negative requirements are derived. These negative requirements are then used to describe misuse cases, which depict test cases for the CUT.
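The risk-to-negative-requirement step can be sketched roughly as follows. The risk entries and the wording of the negative requirements are invented for illustration and are not taken from any real risk model:

```python
# Sketch: deriving negative requirements (NRM entries) from identified
# risks (RM entries). Both the risk records and the requirement wording
# are hypothetical examples.
risks = [
    {"asset": "VM image store", "threat": "a malicious user plants defective code"},
    {"asset": "tenant data",    "threat": "a cross-tenant read via shared storage"},
]

def to_negative_requirement(risk: dict) -> str:
    """Phrase each risk as something the system must NOT allow."""
    return (f"The system must not allow {risk['threat']} "
            f"(asset: {risk['asset']}).")

nrm = [to_negative_requirement(r) for r in risks]
for req in nrm:
    print(req)
```

Each negative requirement then becomes the seed of a misuse case: a scenario a tester deliberately attempts, expecting the system to refuse it.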

The system model depicts the CUT in terms of interfaces (available services), classes (application data types) and operations (service implementations). This is because, in a cloud computing environment, the applications (SaaS), the infrastructure (IaaS) and the available platforms (PaaS) are all abstracted as services, invocable or, in the case of IaaS and PaaS, accessible from outside. However, incorporating only interface (service) definitions during risk analysis by far does not suffice; the implementing classes and operations also have to be investigated, as they may contain design flaws leading to security risks too.

Fig 5: Risk based security testing for cloud environments

The test model and its sub-models (see Figure 5) – viz. the Risk Model (RM), the Negative Requirements Model (NRM) and the Misuse Case Model (MCM) – are intended to be generated automatically via Model-to-Model (M2M) transformations.

Deriving the negative requirements from the identified risks is a straightforward task. An M2M transformation can be employed to transform the RM into the NRM. The latter contains the negative requirements, formulated from the identified risks and described in another custom-tailored DSL. The negative requirements themselves are, more or less, textual descriptions of the malicious activities later depicted in the MCM. For example, a derived negative requirement might describe a possible attempt by a malicious user to plant defective code inside the cloud. The NRM is introduced as an intermediate step, rather than generating the MCM directly from the RM, because the negative requirements need to be refined, due to possible inconsistencies introduced during the transformation process.

UML activities can be incorporated efficiently to generate executable test code. A template-based Model-to-Text (M2T) code generator can be adjusted to meet the new requirements and produce proper test code from the MCM.