Insecurities in API

Analysis

            Many IoT devices have very constrained resources and operate within hostile, heterogeneous environments. This scenario makes security implementation considerations more critical and delicate.

Power Consumption

            Considering that many IoT devices are battery operated or low powered, power consumption is very important when considering any security implementation. API security attacks could drain power or leave too little power to run security implementations well. Bandyopadhyay and Bhattacharyya (2013) found that CoAP was more performant than MQTT in energy consumption and basic transfer over the UDP protocol. [fig] shows a comparison of packets sent using both MQTT and CoAP, illustrating that energy consumption is lower using CoAP. It should also be noted that, unlike CoAP, which closes the connection after each request/response, MQTT and AMQP both leave channels open listening for messages from a centralized broker (Dizdarević, Carpio, Jukan, & Masip-Bruin, 2019). This results in more power usage. Dizdarević et al. note that, because of its relatively high power, processing, and memory consumption, AMQP is best suited for resources with higher processing power.
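            As a hedged illustration of this always-on pattern, the sketch below uses the third-party paho-mqtt package and a hypothetical broker address; a subscribing device holds its TCP connection open and loops indefinitely, which is exactly the behavior that keeps the radio and CPU drawing power between messages.

```python
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Called whenever the broker pushes a message to this device.
    print(msg.topic, msg.payload)

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)  # hypothetical broker; TCP channel stays open
client.subscribe("commands/#")
client.loop_forever()  # device keeps listening, drawing power continuously
```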

Computational expense

            Securing APIs typically requires some computation in the form of cryptography, hashing, and parsing. Of the security methods studied, CoAP over DTLS was found to require the most computational resources to implement. This is because CoAP uses DTLS, a specialized version of TLS for UDP, which requires additional computation to wrap and unwrap UDP datagrams in the TLS security layers. This also results in packet sizes larger than those of the MQTT alternative, which operates over TCP, where standard TLS applies.

Memory requirements

            CoAP is considered a lightweight protocol; as such, its headers, methods, and status codes are all binary encoded. This compression reduces the overall packet size which, in turn, uses less memory (Dizdarević et al., 2019). CoAP also uses the less memory-intensive UDP protocol. However, as noted above, securing UDP requires a special implementation of TLS, which increases the memory footprint of CoAP.

Ease of use

            MQTT and AMQP both use a subscriber/publisher model with a central broker directing messages. This means that subscribed devices are always open and listening for messages, and device messages must be passed to the broker along with a username/password. This method is not as familiar as the traditional request/response model used by HTTP and, notably, by CoAP. This familiarity makes GET, POST, and other HTTP request methods easier to implement using CoAP. DDS is language and operating system independent, which allows for easier implementation among a wider range of developers. DDS also supports automatic discovery, allowing devices to subscribe and publish messages without the need for a broker (Foster, 2015).
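            For comparison, a CoAP exchange looks much like a familiar HTTP request. The following sketch assumes the third-party aiocoap package and a hypothetical device URI; it issues a single GET and receives one response, after which the exchange is complete.

```python
import asyncio
from aiocoap import Context, Message, GET

async def read_sensor():
    # One short-lived request/response exchange, mirroring HTTP GET.
    context = await Context.create_client_context()
    request = Message(code=GET, uri="coap://device.example.com/temperature")
    response = await context.request(request).response
    print(response.payload.decode())

asyncio.run(read_sensor())
```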

User Security

            MQTT is a subscriber/publisher protocol that employs a default plain-text username/password security structure. This means that basic security is built in, but it also means that the username and password are easy to view as messages pass across the network. For this reason, MQTT relies on TLS for securely transporting messages, which also means that security options for MQTT are more limited. On the other hand, AMQP uses the SASL mechanism, which hands off security to manufacturers or companies. This results in far more flexibility in securing APIs. CoAP uses a request/reply method similar to the traditional HTTP model. This means that most HTTP methods of security become applicable to CoAP, including OAuth, JWT, PSK, and RawPublicKey.
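            As a hedged sketch of one of these HTTP-style options, the example below issues and verifies a JWT using the third-party PyJWT package; the secret and device identifier are placeholders, not values from any real deployment.

```python
import jwt  # PyJWT

SECRET = "replace-with-a-managed-secret"  # placeholder, not a real key

# Issue a token for a device...
token = jwt.encode({"device_id": "sensor-42"}, SECRET, algorithm="HS256")

# ...and verify it on the API side before serving the request.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
assert claims["device_id"] == "sensor-42"
```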

Quality of Service (QoS)

            Quality of service is important to ensure data delivery. The MQTT protocol runs over TCP and provides three different reliability mechanisms for ensuring packet delivery in the form of Quality of Service (QoS) levels (Dinculeană & Cheng, 2019). CoAP only offers two levels of QoS. CoAP uses UDP as its transport layer, which is known for speed but lower reliability in packet delivery. TCP, on the other hand, guarantees message delivery but is generally slower. DDS allows for multiple levels of QoS, which are fine-tuned by the user to prioritize data exchanges (Foster, 2015).
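            In paho-mqtt, for instance, the three MQTT QoS levels map to a single qos flag on publish; the broker address below is a placeholder.

```python
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883)  # hypothetical broker

client.publish("sensors/temp", "21.5", qos=0)  # at most once (fire and forget)
client.publish("sensors/temp", "21.5", qos=1)  # at least once (acknowledged)
client.publish("sensors/temp", "21.5", qos=2)  # exactly once (four-step handshake)
```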

System security

            MQTT relies on a broker for devices on the network to communicate. In a similar manner, CoAP relies on a centralized server through which requests and responses are communicated. This results in a single point of failure that could potentially compromise an IoT network. Although DDS uses a subscriber/publisher model, it does not rely on a broker. This decreases the probability of system failure because no single point of failure exists (Dizdarević et al., 2019). By this measure, DDS decouples devices, which allows for better overall network security.

Analysis of Protocols

Solutions      Criterion 1 (description)   Criterion 2 (description)   Criterion 3 (description)
Solution A     Criterion 1 score           Criterion 2 score           Criterion 3 score
Solution B     Criterion 1 score           Criterion 2 score           Criterion 3 score
Solution C     Criterion 1 score           Criterion 2 score           Criterion 3 score

Table 1: An example of the unweighted decision matrix (Ramdhani & Jamari, 2018).

Table 1 above shows an example of an unweighted decision matrix, which can be used to identify some of the best API solutions. The typical unweighted decision matrix evaluates three candidate solutions against a three-pronged set of criteria. Nevertheless, the matrix can be expanded to include more options and criteria, especially through the addition of supplementary columns and rows. In this regard, the matrix is well suited to evolving solutions with changing demands and options. For each criterion, a solution is allocated a score ranging from 1 to 5, where 1 is the lowest and 5 the highest score. After each solution is scored, the aggregate score is calculated to support the evaluation. The aggregate score for each alternative can be calculated using the following formula, as formulated in a 2018 article by Ramdhani and Jamari:

A_x = \sum_{c=1}^{n} w_c \, s_{x,c}

where A_x is the aggregate score for the xth solution, n is the number of criteria, w_c is the total weight of the cth criterion, and s_{x,c} is the specific score of the xth solution in the cth criterion. In the unweighted matrix used here, the weight w_c of each criterion is taken as the column total of all solutions' scores in that criterion. Typically, the highest aggregate score determines the most favorable solution.

Criteria (cn): C1 = Power Consumption, C2 = Computational Expense, C3 = Memory Expense, C4 = Ease of Use, C5 = User Security, C6 = QoS, C7 = System Security.

Solutions (xn)   C1   C2   C3   C4   C5   C6   C7
MQTT             4    4    4    2    2    4    2
AMQP             2    2    2    3    3    2    2
CoAP             4    3    4    4    4    4    2
DDS              5    5    5    5    5    5    5
Total (wc)       15   14   15   14   14   15   11

Table 2: Unweighted decision matrix for API communication protocols

While using figures from the unweighted decision matrix illustrated in Table 2 above, the calculation of the aggregate score A_x for the different API communication protocols is shown below:

MQTT (x1)

(15*4) + (14*4) + (15*4) + (14*2) + (14*2) + (15*4) + (11*2) = 314

AMQP (x2)

(15*2) + (14*2) + (15*2) + (14*3) + (14*3) + (15*2) + (11*2) = 224

CoAP (x3)

(15*4) + (14*3) + (15*4) + (14*4) + (14*4) + (15*4) + (11*2) = 356

DDS (x4)

(15*5) + (14*5) + (15*5) + (14*5) + (14*5) + (15*5) + (11*5) = 490
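The same arithmetic can be scripted. The short Python sketch below reproduces the aggregate scores above: each criterion's weight is its column total from Table 2, and each protocol's aggregate is the weighted sum of its row.

```python
# Each criterion's weight w_c is its column total; a solution's
# aggregate score is sum(w_c * s_xc) across the seven criteria.
scores = {
    "MQTT": [4, 4, 4, 2, 2, 4, 2],
    "AMQP": [2, 2, 2, 3, 3, 2, 2],
    "CoAP": [4, 3, 4, 4, 4, 4, 2],
    "DDS":  [5, 5, 5, 5, 5, 5, 5],
}

weights = [sum(column) for column in zip(*scores.values())]
# weights == [15, 14, 15, 14, 14, 15, 11]

for protocol, row in scores.items():
    aggregate = sum(w * s for w, s in zip(weights, row))
    print(protocol, aggregate)  # MQTT 314, AMQP 224, CoAP 356, DDS 490
```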

From the analysis of alternatives, it is clear that DDS is the best communication protocol for IoT designs. It outdoes CoAP, the second-best alternative, by 134 points. MQTT and AMQP score the lowest aggregate points, possibly because these protocols possess significant scalability and security issues. In particular, AMQP has the lowest aggregate score of 224, which is less than half that of DDS. Notably, CoAP and MQTT record relatively close aggregate scores. In this regard, when choosing between these two protocols, it is advisable to expand the criteria to acquire further insight into their performance.

Solutions

Untrusted Certificate Attacks

  • Emergency Change of the Certificate Authorities That Issue the Company's Certificates

The certificates that a company has always used may be compromised or may become untrusted for some other reason. In this case, the company must be able to turn quickly to other certificate authorities to issue new and reliable certificates. Otherwise, the certificates, and in fact the company itself, are considered untrusted; users' access to the website is reduced, and the company suffers enormous losses. Therefore, the company must be ready to shift digital keys and certificates immediately to other certificate authorities. However, few companies use an automation system to shift keys and certificates when needed. In light of NIST's recommendation, "Preparing for and Responding to Certification Authority Compromise and Fraudulent Certificate Issuance" is very important. Companies need to know what certificates they use, where these certificates are purchased, and which other certificate authorities they can switch to when needed.

When a certificate authority becomes untrusted, each company should remove that certificate authority from its list of certificate issuers and replace it with other certificate authorities quickly. Devices of any company should be able to trust other certificate authorities and accept and install new certificates (Peters, 2015).

To avoid untrusted certificate attacks, all certificates should be validated and signed by a trusted Certificate Authority. This authority can be a reputable third-party Certificate Authority, or a public key infrastructure that exists within the organization can be used. "The latter would require the Certificate Authority root certificate to be installed on all connecting clients, which can be done through the modification of Windows Group Policy:

  • Open the relevant Group Policy Object (GPO)
  • In the console tree, click Trusted Root Certification Authorities: Policy Object Name/Computer Configuration/Windows Settings/Security Settings/Public Key Policies/Trusted Root Certification Authorities
  • On the Action menu, point to All Tasks, then click Import to add the root certificate”.
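As a minimal illustration of enforcing trusted-CA validation on the client side (here in Python rather than Group Policy), the sketch below rejects any server whose certificate chain does not lead back to the trusted root; the hostname and CA bundle path are hypothetical placeholders.

```python
import socket
import ssl

# Trust only chains rooted in the organization's CA bundle (hypothetical path).
context = ssl.create_default_context(cafile="company-root-ca.pem")
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED  # handshake fails on untrusted chains

with socket.create_connection(("api.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="api.example.com") as tls:
        print(tls.getpeercert()["issuer"])  # reached only if the chain verified
```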

Expired Certificates Attacks

  • New Certificates are Agile Certificates


In the technology world, certificate expiration has one benefit: it keeps certificates modern. In the past, it was possible to obtain certificates valid for 5 years or more. Today there is a 3-year limit on issued certificates, and the industry is looking to reduce this limit to even less than 3 years. Shorter certificate lifetimes make it easier to update security standards. "Last year the entire internet migrated to the new SHA-2 signature algorithm. It was a bit bumpy because of those old 5+ year certificates out there."

Figure 4: SSL Certificate Validity Periods

From a policy perspective, long validity periods are problematic. When a new policy is issued, companies have to wait until their certificates' validity periods end, which can take up to 5 years. Only once the validity periods are over, and certificates are upgraded and updated, is it possible to comply with the new policy. Uneven periods make it challenging to coordinate between companies in responding to a new policy. In many cases, users are forced to update their certificates in the middle of a validity period.

“The CA/B Forum is an industry organization which sets best practices and standards for SSL certificate issuance. They are the ones ensuring that new certificates don’t continue to use old security measures. One of the goals the CA/B Forum has been working towards is shorter certificate validity. When SSL certificates expire more frequently, it makes it easier to improve security practices”.       
When updating a company's SSL certificates, it must be ensured that all information in the certificates is correctly entered. Customers should be assured that the company is the rightful owner of its certificates and that the certificates' security measures are always up to date (Lynch, 2016). To avoid expired certificate attacks, certificates should be renewed and maintained periodically. Ensure that the certificate re-validation process begins at least thirty days before the expiration date.
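A minimal sketch of that thirty-day rule, using only Python's standard library and a placeholder hostname, might look like this:

```python
import datetime
import socket
import ssl

RENEWAL_WINDOW_DAYS = 30  # begin re-validation at least 30 days before expiry

def days_until_expiry(host: str, port: int = 443) -> int:
    """Return the number of days remaining on a server's TLS certificate."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]
    expires = datetime.datetime.utcfromtimestamp(ssl.cert_time_to_seconds(not_after))
    return (expires - datetime.datetime.utcnow()).days

if days_until_expiry("api.example.com") <= RENEWAL_WINDOW_DAYS:
    print("Start the certificate renewal process now.")
```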

Weak Hashing Algorithm Attacks

  • Security of Cryptographic Hash Functions (CHFs)

Three features of CHFs should be considered to meet the basic requirements for establishing a secure network: pre-image resistance, collision resistance, and second pre-image resistance. If these three features are taken into account, CHFs can be secured and applied in many areas. Furthermore, other features should be considered when applying CHFs to different applications.

"Pre-image resistance: CHFs are considered to be computationally non-invertible which means, if a hash code H(M) is generated for a message M, it is considered to be computationally infeasible for an adversary (A) to retrieve the original message (M) back" (Kishore & Kapoor, 2016, p. 2).

Figure 5: Pre-image Resistance

"Collision resistance: For a CHF to be weak collision resistant or second pre-image resistant, it should be computationally infeasible for an adversary (A) to find two different messages M and M′ which can generate the same hash values from that CHF. That is, to find M, M′ such that H(M) = H(M′) but M ≠ M′" (Kishore & Kapoor, 2016, p. 3).

Figure 6: Collision Resistance

"Second pre-image resistance: For a CHF to be strong collision resistant, it should be computationally infeasible for an adversary (A) with given CHF H and message M to find another message M′ where M ≠ M′ and H(M) = H(M′)" (Kishore & Kapoor, 2016, p. 3).

Figure 7: Second Pre-image Resistance

It is recommended to use stronger hashing algorithms such as the SHA-2 family of hashing algorithms (SHA-224, SHA-256, SHA-384, and SHA-512). Weaker hashing algorithms, such as SHA-1 and MD5, should be replaced to make attacks less likely (Perspective Risk, 2019).
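A quick illustration with Python's standard hashlib module shows the deprecated and recommended algorithms side by side; the message is an arbitrary placeholder.

```python
import hashlib

message = b"api-payload"

# Deprecated, collision-prone algorithms (shown for comparison only).
print("MD5:    ", hashlib.md5(message).hexdigest())
print("SHA-1:  ", hashlib.sha1(message).hexdigest())

# Recommended SHA-2 family members.
print("SHA-256:", hashlib.sha256(message).hexdigest())
print("SHA-512:", hashlib.sha512(message).hexdigest())
```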

NIST has published a family of cryptographic hash functions, the Secure Hash Algorithm, as a U.S. Federal Information Processing Standard (FIPS) that significantly strengthens the security guarantees of hash functions. This family includes several algorithms that have been developed in recent years to meet robust security requirements. Many of these functions are built from two components: a compression function and a domain extender.

In recent years, attack algorithms have become more efficient and more broadly applicable. The newest hash designs respond with constructions that work in multidimensional and multilevel systems. This approach is based on the creation of a dependent chain of fast and sophisticated algorithms to prevent attacks from succeeding (Kishore & Kapoor, 2016).

Analysis of Alternatives

Solutions                                          Technology maturity   Integration risk   Manufacturing feasibility
Emergency Change of the certificate authorities    3                     2                  4
Agile Certificates                                 4                     5                  5
Security of Cryptographic Hash Functions (CHFs)    5                     5                  5

Table 1: Critical Technology Elements (CTEs) Rating

The solutions to the problem of the weaknesses of TLS that lead to insecurity in APIs were scored according to the defined criteria. The results are shown in Table 1.

Technology Maturity: A technology is considered mature when it has been in use long enough that its inherent flaws and problems are well known and have been fixed or minimized in later versions. The capability of changing certificate authorities seems fairly immature in this regard, as the mechanism for switching certificate authorities is still in its infancy. Companies active in this field are still expanding, and a more extensive outreach effort is needed to inform all relevant companies.

Adoption of Agile Certificates is increasing, and most companies try to familiarize employees with the basics of Agile Certificate training. It has therefore done better than emergency change of certificate authorities in the area of technology maturity. Security of Cryptographic Hash Functions (CHFs) has brought fundamental changes to all areas of network security. Along with rapid development and efficient resource utilization, it has also helped to provide robust security.

Integration Risk: Integration risk reflects the ability to integrate information technology across departments and organizations. Poor integration between two technologies leads to problems in data integration and, consequently, functional failures in processes and operating systems. Emergency change of certificate authorities has not yet reached a stage at which it has been thoroughly discussed among the various departments. It seems that this solution still needs more time before it can be implemented seamlessly and comprehensively among the companies involved, so it receives a lower score on this criterion.

 “Agile certification increases adaptability in Agile technologies that will increase the team productivity and enhance the customer satisfaction. Agile certificate helps companies to learn practical techniques for planning, estimating the cost of the project in an Agile way” (Greycampus.com, 2019). So, this solution will help increase the exchange of information between different departments and increase security.

"One main use of hashing is to compare two files for equality. Without opening two document files to compare them word-for-word, the calculated hash values of these files will allow the owner to know immediately if they are different. Hashing is also used to verify the integrity of a file after it has been transferred from one place to another, typically in a file backup program like SyncBack. To ensure the transferred file is not corrupted, a user can compare the hash value of both files. If they are the same, then the transferred file is an identical copy" (Chung, 2019). For the purpose of security, this solution is considered to promote efficient integration.
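A minimal sketch of this integrity check, using Python's standard hashlib and hypothetical file names, might look like this:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large files need not be read into memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical backup-verification usage:
if sha256_of("original.dat") == sha256_of("backup.dat"):
    print("The transferred file is an identical copy.")
```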

Manufacturing Feasibility: This analysis addresses three questions: Is the technology feasible? Is this feasibility in line with the projected costs? Is the technology profitable? (Jain & Kuthe, 2013). A quick change to a new certificate authority usually brings costs. Compressing the timeline and switching to a new certificate authority during an emergency is often accompanied by unforeseen expenses. However, switching can be very cost-effective over time, so this solution is relatively feasible.

Agile Certificates are a feasible solution because research shows that both short-term and long-term education investments are profitable for companies. Often the cost of training staff and managers, especially in the field of security, is far lower than the unforeseen expenses caused by a lack of knowledge about security in APIs.

Security of Cryptographic Hash Functions (CHFs) is feasible because it does not require high costs for creation and application. The process of deploying secure networks employs personnel who specialize in decrypting relevant files and can detect attacks. Investing in a team of professionals and creating optimal conditions for them reduces, over time, the costs imposed by attacks on security.

Solutions                                          Technology maturity   Integration risk   Manufacturing feasibility
Emergency Change of the certificate authorities    3                     2                  4
Agile Certificates                                 4                     5                  5
Security of Cryptographic Hash Functions (CHFs)    5                     5                  5

Table 3: The scores of the three API solutions on the unweighted decision matrix

Using the score-aggregating formula for the unweighted decision matrix, A_x = \sum_{c=1}^{n} w_c \, s_{x,c}, the criterion weights are the column totals of the matrix: w_1 = 12 (technology maturity), w_2 = 12 (integration risk), and w_3 = 14 (manufacturing feasibility). In this regard, the most heavily weighted criterion in evaluating the API solutions is manufacturing feasibility. Nevertheless, scores in the other criteria are still relevant to identifying the best solution, so it is necessary to calculate the aggregate score for each solution. For the Emergency Change of the certificate authorities solution, the aggregate score is (12*3) + (12*2) + (14*4) = 116. For Agile Certificates, it is (12*4) + (12*5) + (14*5) = 178. Finally, for Security of Cryptographic Hash Functions (CHFs), the aggregate score is (12*5) + (12*5) + (14*5) = 190. Therefore, the unweighted decision matrix indicates that, when compared to Agile Certificates and Emergency Change of the certificate authorities, Security of Cryptographic Hash Functions provides the best API security solution.

References

Arxiv.org. (2019). A Review of Man-in-the-Middle Attacks. [online] Available at: https://arxiv.org/pdf/1504.02115.pdf [Accessed 20 Nov. 2019].

Beal, V. (2019). What is API security? [online] Redhat.com. Available at: https://www.redhat.com/en/topics/security/api-security [Accessed 21 Oct. 2019].

Bischoff, P. (2019). UDP vs TCP: What are they and how do they differ? | Comparitech. [online] Comparitech. Available at: https://www.comparitech.com/blog/vpn-privacy/udp-vs-tcp-ip/ [Accessed 20 Oct. 2019].

Brown, K. (2013). SANS Institute: Reading Room – Authentication. [online] Sans.org. Available at: https://www.sans.org/reading-room/whitepapers/authentication/dangers-weak-hashes-34412 [Accessed 22 Nov. 2019].

Chung, C. (2019). What is Hashing? Benefits, types and more. [online] 2brightsparks.com. Available at: https://www.2brightsparks.com/resources/articles/introduction-to-hashing-and-its-uses.html [Accessed 1 Dec. 2019].

Comodosslstore.com. (2019). Hazards of an Expired SSL certificate. [online] Available at: https://comodosslstore.com/blog/hazards-of-an-expired-ssl-certificate.html [Accessed 22 Nov. 2019].

Duncan, R. (2013). Netcraft | SSL: Intercepted today, decrypted tomorrow. [online] News.netcraft.com. Available at: https://news.netcraft.com/archives/2013/06/25/ssl-intercepted-today-decrypted-tomorrow.html [Accessed 21 Oct. 2019].

Greycampus.com. (2019). Greycampus. [online] Available at: https://www.greycampus.com/opencampus/agile-certified-practitioner/benefits-for-pmi-acp-certified-professional [Accessed 1 Dec. 2019].

Jain, P., & Kuthe, A. M. (2013). Feasibility study of manufacturing using rapid prototyping: FDM approach. Procedia Engineering, 63, 4-11.

Kishore, N., & Kapoor, B. (2016). Attacks on and advances in secure hash algorithms. IAENG International Journal of Computer Science, 43(3), 326-335.

Lynch, V. (2016). Why Do SSL Certificates Expire? [online] Hashed Out by The SSL Store™. Available at: https://www.thesslstore.com/blog/ssl-certificates-expire/ [Accessed 23 Nov. 2019].

Perspective Risk. (2019). Reducing Your Risks: SSL and TLS Certificate Weaknesses. [online] Available at: https://www.perspectiverisk.com/multiple-ssl-tls-certificate-weaknesses/ [Accessed 6 Nov. 2019].

Peters, S. (2015). What You Should, But Don’t, Do About Untrusted Certs, CAs. [online] Dark Reading. Available at: https://www.darkreading.com/cloud/what-you-should-but-dont-do-about-untrusted-certs-cas-/d/d-id/1322111 [Accessed 23 Nov. 2019].

Slivker, A. (2019). What you really need to know about securing APIs with mutual certificates. [online] Nevatech.com. Available at: https://www.nevatech.com/blog/post/What-you-need-to-know-about-securing-APIs-with-mutual-certificates [Accessed 21 Oct. 2019].

Techopedia.com. (2019). What is Transport Layer Security (TLS)? – Definition from Techopedia. [online] Available at: https://www.techopedia.com/definition/4143/transport-layer-security-tls [Accessed 21 Oct. 2019].

Ramdhani, S., & Jamari, J. (2018). The modeling of a conceptual engineering design system using the decision-matrix logic. In MATEC Web of Conferences (Vol. 159, p. 02022). EDP Sciences.

Information Systems and Cloud Computing

Cloud computing services come with new capabilities for hosting and supporting complex collaborative operations in an organization. With a wide range of applications, including backup, communication and big data analytics, the technology has revolutionized various areas of business management. Furthermore, as the technology continues to evolve, cloud computing consulting services are increasing as well. For instance, the limitless storage capacity and the huge computing power of the cloud have significantly increased the amount of user data that organizations can store (Zhou, Xu & Kimmons, 2015). In particular, it is possible to create applications, such as chatbots, and engage in extremely personal communications based on user behavior and preferences. Moreover, some cloud-based applications, including chatbots, can optimally use the cloud's computing power to deliver contextual and customized customer experiences. Nevertheless, these developments may come with undesirable side effects, including additional vulnerabilities and threats emanating from collaboration and exchange of data over the internet (Crane, Matten, Glozer & Spence, 2019). In this regard, users are increasingly concerned about the privacy and security elements of cloud computing. Moreover, the emergence of new technology usually has a ripple effect, introducing fresh ethical, political and social problems that must be addressed at the individual, political and social levels. Therefore, the implementation of cloud computing systems must be accompanied by the deployment of the necessary strategies to reduce these side effects.

Background

Cloud computing involves the delivery of hosted services, which organizations can access on demand over the internet to reduce operational costs and investment in information technology. In particular, there are three broad cloud computing service categories: SaaS (Software as a Service), IaaS (Infrastructure as a Service) and PaaS (Platform as a Service). Furthermore, cloud computing services have three unique characteristics that distinguish them from conventional web hosting: a cloud service is sold on demand, usually by the hour or the minute; it is elastic, enabling users to have as little or as much of it as they may need at any specific time; and it is entirely managed by a third-party provider, allowing the management of an organization to focus on core operations and processes (Gretzel, Werthner, Koo & Lamsfus, 2015).

Benefits and Opportunities

Big and small, emerging and established organizations across the world are increasingly embracing cloud computing to improve competitive advantage in the contemporary global business environment, where competition is escalating tremendously. In the past few years, global organizations have considerably increased their investment in information systems. For instance, in 2015, global corporations and governments spent about 3.4 trillion euros on hardware, software and telecommunication equipment and an additional 544 million euros on management consulting and services, including redesigning business operations to exploit the benefits of emerging technologies. This increase in investment clearly indicates the tremendous value of information systems to the contemporary organization. Some of the reasons organizations are increasing their investment in information systems include operational excellence; the development of new products, services and business models; enhanced supplier and customer intimacy; improved decision making; and survival in a highly competitive environment. Because of this wide range of application areas, organizations are benefiting considerably from information systems (Rothaermel, 2013). In particular, both established and non-established organizations in tourism can considerably benefit from cloud computing. Previous studies, especially those based on the AIDA theory, indicate that a significant percentage of tourism consumers rely on the internet to acquire relevant information and perform the necessary actions: 77% in the awareness and interest stages, 65% in the desire stage, and 34% in the action (booking and paying) stage of the consumption journey map (Pitana & Putu-Diah, 2016). Moreover, the percentage of customers relying on the internet at the action stage is increasing at a considerably high rate (244%). Thus, organizations are strategically positioning themselves, especially by adopting relevant information technology and techniques, to take advantage of the increased dependency on the internet in the tourism industry.

Established organizations considerably rely on information systems to improve overall organizational performance, especially by enhancing operational efficiency and enabling the development of new services, products and business frameworks, among other purposes. Digital organizations that conduct their businesses entirely on the internet, and hence rely heavily on information systems, are increasing at a considerably high rate. Consequently, modern information technology is playing an essential role in the performance of the contemporary organization, especially in the tourism industry, where consumer dependence on the internet is growing at a remarkable rate (Bidoki & Kargar, 2016). For instance, established organizations rely on cloud-hosted services to ensure that they can focus on core managerial services without neglecting the ever-increasing digital market. Nevertheless, small and medium organizations cannot afford the cost of implementing in-house IT solutions to benefit appropriately from information systems. For instance, SMEs without sufficient financial resources cannot afford expensive data analytics to effectively analyze Big Data (Moghaddam, Ahmadi, Sarvari, Eslami & Golkar, 2015). In this regard, cloud computing, which significantly reduces the cost of IT solutions by providing hosted services over the internet, can considerably assist small and medium enterprises, which cannot afford in-house IT solutions, to take advantage of the current tremendous growth in the digital market (Buhalis & Amaranggana, 2013). With cloud computing, which offers shared resources on an on-demand basis over the internet, SMEs do not have to purchase expensive IT equipment and implement in-house solutions, including sophisticated software that can be overwhelming to implement and manage, especially for yet-to-be-established organizations (Vajjhala & Ramollari, 2016). However, like any other new technology, cloud computing poses a plethora of ethical, social and political issues that can significantly affect the performance of modern enterprises. For instance, because cloud computing involves the transfer of information and information system management to a third party, an organization subscribing to cloud-hosted services can no longer guarantee the privacy and security of consumer data (Chou, 2015). In sensitive industries, such as tourism, which involve sharing substantial personal information, such as names, location and financial details, a compromise of privacy can significantly impact the performance of an organization. Therefore, while cloud computing possesses substantial benefits, organizations must identify and implement the necessary strategies for mitigating information security risks.

Costs and Threats

Despite offering many benefits and opportunities to both established and non-established organizations, cloud computing involves several overarching costs and threats, which vary significantly with the category of business and industry. In general, communication over the internet involves the transfer of information through a series of computers, which significantly increases the risk of security compromise. In particular, each computer in the series represents a node at which information security can be compromised. In the tourism decision-making journey map, especially in the action stage, consumers share immense amounts of personal information, which is significantly valuable to bad actors. Moreover, in the current digital age, malicious actors are growing not only in numbers, but in the techniques that they can use to compromise information systems and gain access to considerably sensitive data (Jovicic, 2019). For instance, the considerable advancement of machine learning techniques in recent years has significantly increased the capabilities of attackers and hackers, because sophisticated technology can be used to listen to and intercept communications over the internet (Morrow, 2019). In this regard, an organization must identify and hire trustworthy cloud service providers and implement effective information security techniques, such as blockchain and encryption, to benefit from cloud computing (Bidoki & Kargar, 2016). Nevertheless, trustworthy cloud service providers and effective information security technology can be considerably expensive, especially for non-established organizations. Thus, while cloud computing can considerably improve the performance of enterprises, it may still be inaccessible or extremely risky for small and medium enterprises in the tourism industry.

Conclusions

Cloud computing has significant potential to improve organizational performance, especially in the tourism industry, which has become considerably competitive. In general, many organizations across the world rely on information systems to improve performance, especially through the development of new products, services and business frameworks, attaining operational excellence, enhancing decision making, and improving customer and supplier intimacy, among others. Consequently, the amount of global investment in information systems has significantly increased in the recent past. Nevertheless, the cost of information systems and infrastructure can be considerable, especially for small and medium enterprises. In this regard, cloud computing, which offers hosted services on an on-demand basis, hence considerably reducing the cost of services, has become considerably relevant to both established and non-established organizations. With cloud computing, big and small enterprises can considerably reduce operational costs and improve the output of processes, because the technology involves the transfer of information system and infrastructure management responsibilities to the service provider. Without the obligation of managing information systems, management can focus on core business operations and processes, hence improving the efficiency of an organization. Nevertheless, transferring information and information system management responsibilities to third parties involves a paradigm shift, which considerably affects consumer visibility and control of information. Moreover, involving third parties in the management of consumer information can considerably compromise system security, especially in the tourism industry, which involves sharing sensitive information. Thus, enterprises in the tourism industry must identify reliable service providers and implement reliable risk mitigation techniques, such as blockchain and encryption, to reduce the threat and benefit from cloud computing.

References

Bidoki, M. Z., & Kargar, M. J. (2016). A social cloud computing: Employing a Bee Colony Algorithm for sharing and allocating tourism resources. Modern Applied Science, 10(5), 177-185.

Pitana, I. G., & Putu-Diah, S. P. (2016, September). Digital marketing in tourism: the more global, the more personal. In International Tourism Conference: Promoting Cultural and Heritage Tourism. Udayana University, Bali (pp. 1-3).

Buhalis, D., & Amaranggana, A. (2013). Smart tourism destinations. In Information and communication technologies in tourism 2014 (pp. 553-564). Springer, Cham.

Chou, D. C. (2015). Cloud computing: A value creation model. Computer Standards & Interfaces, 38, 72-77.

Crane, A., Matten, D., Glozer, S., & Spence, L. (2019). Business ethics: Managing corporate citizenship and sustainability in the age of globalization. Oxford University Press.

Gretzel, U., Werthner, H., Koo, C., & Lamsfus, C. (2015). Conceptual foundations for understanding smart tourism ecosystems. Computers in Human Behavior, 50, 558-563.

Gretzel, U., Zhong, L., & Koo, C. (2016). Application of smart tourism to cities. International Journal of Tourism Cities, 2(2).

Jovicic, D. Z. (2019). From the traditional understanding of tourism destination to the smart tourism destination. Current Issues in Tourism, 22(3), 276-282.

Moghaddam, F. F., Ahmadi, M., Sarvari, S., Eslami, M., & Golkar, A. (2015, May). Cloud computing challenges and opportunities: A survey. In 2015 1st International Conference on Telematics and Future Generation Networks (TAFGEN) (pp. 34-38). IEEE.

Morrow, T. (2019). Risks, Threats, & Vulnerabilities in Moving to the Cloud.

Rothaermel, F. T. (2013). Strategic management: concepts. New York, NY: McGraw-Hill Irwin.

Vajjhala, N. R., & Ramollari, E. (2016). Big data using cloud computing: Opportunities for small and medium-sized enterprises. European Journal of Economics and Business Studies, 2(1), 129-137.

Zhou, X., Xu, C., & Kimmons, B. (2015). Detecting tourism destinations using scalable geospatial analysis based on cloud computing platform. Computers, Environment and Urban Systems, 54, 144-153.

INFORMATION SECURITY MANAGEMENT

IS Management and Utilization Approaches

Information Preservation

            Many categories of digital material chosen for long-term preservation may include sensitive and confidential information, which must be protected to prevent non-authorized user access. In a huge number of cases, information protection may be a regulatory or legal obligation on the organization. Moreover, digital materials must be managed in accordance with the organization's IS policy to prevent security breaches. International Organization for Standardization (ISO) standard 27001 (ISO, 2013a) outlines the way security procedures can be organized and monitored, while ISO 27002 (ISO, 2013b) offers guidelines on the deployment of ISO 27001-compliant procedures. Organizations complying with such standards can be accredited and validated by the relevant external organization. In some instances, the organization's IS security policy may affect digital material preservation activities as well, and hence may require engaging the support of Information and Communication Technology (ICT) and information governance teams to enhance preservation processes while using the appropriate methods.

            Some information security techniques, such as encryption, immensely increase the complexity of the preservation process and should, therefore, be avoided whenever possible for archival copies. In this regard, alternative security methods, such as robust user authentication demands for remote access, locked-down terminals, and access restricted to secure rooms or controlled locations, may need to be deployed more rigorously for sensitive unencrypted files. Nevertheless, alternative methods may not always be feasible or sufficient, and files received on ingest, especially from a depositor, may already be encrypted; hence it is essential to possess adequate knowledge regarding IS options, including encryption and key management, and their impacts on digital preservation.

Encryption

            Encryption is a cryptographic method that protects digital material by changing it into a scrambled state. The technique may be deployed at numerous levels, ranging from a single file to an entire disk. There exist many encryption algorithms, each of which utilizes a unique technique of information scrambling. To unscramble and convert data back to its initial form, a key is required. The size of the key immensely determines the strength of an encryption technique. For instance, 256-bit encryption is more secure than 128-bit encryption. In this regard, large key sizes are recommended for the encryption of supersensitive, hence high-risk, files to significantly reduce the chances of being compromised. Nevertheless, the technique is only effective when the encryption key is unknown to third parties, who may instigate immense information security breaches. For instance, when a user has logged into an encrypted drive and left the computer powered on and unattended, third parties have a tremendous opportunity to access information held in the encrypted region, probably leading to its release.

            In a repository, encryption techniques can considerably lose effectiveness with time, because of the intense competition between their development and the development of computational mechanisms to break them. In this regard, when deployed, all encryption used by a repository requires active management and timely updates to uphold data security. Moreover, to enable timely access to encrypted digital material in a repository, the organization must manage the encryption keys: destruction or loss of the keys makes the encrypted data inaccessible to the organization. Therefore, while using encryption, an organization must ensure appropriate key management, especially by conducting periodic updates, as well as user training, to reduce the chances of unauthorized access to encrypted information by third parties.
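            As a hedged sketch of why key management matters, the example below encrypts an archival file with a symmetric key using the third-party cryptography package; the file names are placeholders, and losing the key makes the ciphertext permanently unreadable.

```python
from cryptography.fernet import Fernet

# The key must be stored and managed securely; losing it makes
# the encrypted archive permanently inaccessible.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("deposit.dat", "rb") as f:          # hypothetical archival file
    ciphertext = cipher.encrypt(f.read())

with open("deposit.dat.enc", "wb") as f:
    f.write(ciphertext)

# Recovery is only possible with the original key:
plaintext = Fernet(key).decrypt(ciphertext)
```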

Access Control

            Access controls enable an administrator to stipulate who is authorized to access digital material and the category of access permitted, which may include read-only or write access. The National Digital Stewardship Alliance (NDSA) recommends four levels at which access control can be used to support the preservation of digital materials. The organization primarily emphasizes understanding of authorized users, access rights, and access restriction enforcement, using the procedure highlighted below.

NDSA Level   Activity
1            Identify users with read, write, move and delete access rights to individual files; restrict user access authorization to specific files
2            Document content access restrictions
3            Maintain logs of user activity on files, including preservation and deletion actions
4            Conduct audits of the logs
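A minimal sketch of such file-level access control, with hypothetical roles and paths, might look like this:

```python
# Hypothetical file-level access control list: path -> role -> allowed actions.
ACL = {
    "archive/deposit-2019.dat": {
        "curator":  {"read", "write", "move", "delete"},
        "reviewer": {"read"},
    },
}

def is_authorized(role: str, path: str, action: str) -> bool:
    """Check whether a role may perform an action on a file."""
    return action in ACL.get(path, {}).get(role, set())

assert is_authorized("reviewer", "archive/deposit-2019.dat", "read")
assert not is_authorized("reviewer", "archive/deposit-2019.dat", "delete")
```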

Redaction

            Redaction involves analyzing a digital resource to identify sensitive or confidential information and replacing or removing it to minimize the chances of unauthorized access. Some of the most common redaction techniques include pseudonymization and anonymization, which are used to extract personally identifiable information, as well as to clean authorship details. With regard to datasets, redaction is usually performed by removing information while retaining the record structure in the release version. Notably, redaction should be conducted on a copy of the original rather than the original itself.

            A huge number of digital materials developed using office systems, including Microsoft Office, are stored in proprietary, binary-encoded formats that hold significant information which is never displayed; hence its presence may not be apparent. Moreover, such materials may include audit trails, change histories or embedded metadata that can be utilized to recover deleted information, hence circumventing simple redaction processes. A combination of format conversion and information deletion can be used to perform redaction on digital material. Certain formats, including ASCII text files, only contain displayable information; in this regard, conversion to such a format eliminates any information which may be hidden in non-displayable segments of the bit stream.
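            As a hedged sketch of simple rule-based redaction, the example below strips email addresses from a release copy while leaving the record structure intact; the pattern and record are illustrative only.

```python
import re

# Illustrative pattern for personally identifiable email addresses.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> str:
    """Replace matched PII with a marker, preserving the record structure."""
    return EMAIL.sub("[REDACTED]", text)

record = "Deposited by jane.doe@example.com on 2019-04-18"
print(redact(record))  # Deposited by [REDACTED] on 2019-04-18
```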

Information Assurance Compliance with Government Regulations

            Many organizations still face a high number of Information Assurance (IA) policy compliance issues, which can be attributed to weak alignment between Information Security (IS), IA and corporate strategies. There is a wide range of factors, including organization size, competition, technology and government regulations, which immensely determine the level of compliance, hence the ability of an organization to circumvent information security incidents (Cannoy & Salam, 2010). For instance, organization size immensely determines the ability of an organization to implement robust technology, such as Electronic Medical Record (EMR) systems, which can immensely improve compliance. However, because the level of influence varies from one factor to another, it is increasingly difficult to effectively address cyber-security concerns using only an information security strategy implemented in a single department (Cherdantseva, Hilton & Rana, 2012). For instance, while previous studies have established that technology has less impact on cyber-security, single-department-oriented information security tends to focus largely on it, hence immensely compromising the ability of an organization to circumvent or reduce incidents (McFadzean, Ezingeard & Birchall, 2011). Therefore, organizations must consider merging IA and corporate strategies, by aligning the internal business environment to address the external environment factors that immensely determine the propensity for information assurance compliance, to improve protection from cybercriminals.

Propensity for IA Compliance

            Government regulation is an external business environment factor, which immensely determines the propensity for IA compliance of an organization. In most cases, the corporate strategy determines the level of compliance with policies, because organizations comply with policies that favor their interests. For instance, profit-making organizations, whether deliberately or not, largely comply with customer-oriented policies, while not-for-profit institutions mostly emphasize standards-oriented regulation. According to Braun, Vance and Alexander (2015), for-profit healthcare institutions are likely to comply with privacy rules, hence improving their ability to win and retain customers, because patients immensely value confidentiality. On the other hand, not-for-profit organizations, without an increased focus on profitability, are likely to implement the Health Insurance Portability and Accountability Act demands to improve standards. Therefore, each category of organization encounters a considerably high number of IA compliance issues because of the selective adoption of regulations rather than embracing a holistic approach to information security management.

            Organizational Compliance. The organizational propensity for information assurance policy compliance depends on factors associated with both the external and internal business environments. For instance, technology, which is an external business environment factor, has been found to immensely influence an organization's IA compliance. Previous studies on information assurance policy compliance indicate that larger not-for-profit organizations, which are regarded as Information Technology (IT) leaders, with a higher EMR system base, are more likely to comply with the Health Insurance Portability and Accountability Act (HIPAA). Moreover, in the case of security rules, the larger not-for-profit hospitals, regarded as IT leaders, with a higher installed EMR base, are on average more likely to be compliant regardless of their academic status. In this regard, the size and category of the organization, which immensely determine the ability of an organization to acquire the necessary technology, play an immense compliance role as well. While size determines the ability to procure supporting technology, organizational category dictates the willingness to spend on compliance. Compared to for-profit organizations, not-for-profit organizations do not necessarily consider the return on investment, but largely focus on factors such as accountability in service delivery (Pascu, 2016). Therefore, larger not-for-profit organizations are likely to implement accountability-enhancing technology compared to for-profit institutions, which largely focus on profitability.

            The set of customer demands is another external business environment factor which immensely influences the compliance of an organization. For instance, in the healthcare sector, patients immensely value privacy; hence organizations intending to retain, as well as win, customers must emphasize increased information security. According to Braun, Vance and Alexander (2015), on average, larger for-profit health organizations and larger academic hospitals, whether considered IT leaders or not, possess a higher propensity to comply with the privacy rule. Nevertheless, compared to larger academic hospitals, larger for-profit hospitals are more likely to comply with privacy. For the transaction rule, on the other hand, larger for-profit hospitals, whether considered IT leaders or not, have a higher propensity to comply. While larger academic hospitals do not necessarily emphasize profitability, larger for-profit hospitals must ensure a return on investment, hence maintaining an improved appeal to win as many customers as possible. Therefore, because healthcare customers immensely value privacy, the large for-profit organizations must be compliant with the privacy rule.

            Apart from the external business environment factors, organizational compliance immensely depends on elements internal to an institution, which include human resources, operational efficiency, infrastructure and capital. For instance, the availability of information management infrastructure, such as EMR systems, which increasingly enhances the ability to align the corporate strategy with information assurance, can immensely determine the propensity for compliance. However, because most internal activities rely tremendously on human resources, organizational compliance immensely depends on individual employees. Thus, it is essential to evaluate, as well as emphasize, individual employee compliance in an organization.

            Individual Compliance. There are internal factors, including security infrastructure and the skills of relevant employees, which are linked to external environment factors, such as systemic competences and IA governance, that immensely influence information assurance compliance for an organization. Like organizational compliance, individual information security behavior varies immensely with the perceived importance of compliance, awareness and work designs. The level of information security knowledge determines an individual's propensity for information assurance compliance. For instance, in a healthcare organization, the radiology office possesses a deeper level of compliance compared to the orthopedic office. While the orthopedic department does not largely deal with immense amounts of patient information, the radiology office involves gathering patient information through diagnostic imaging. In this regard, the radiology office will most likely emphasize information compliance compared to the orthopedic department. Therefore, the propensity for compliance immensely depends on the user-perceived importance of information assurance, and immensely affects the information security of the entire department, as well as the organization.

            Organizational factors, including culture, policies, rules, powers and processes, immensely influence individual information security behavior, hence IA compliance. Users are highly likely to violate information controls because of poorly designed policies, processes and rules, or a lack of adequate understanding of organizational power dimensions (Harrell, 2014). For instance, system users with a reduced understanding of the confidentiality requirements for information security system credentials are likely to expose the organization to an increased risk of attack. Moreover, poorly designed security processes, and information systems allowing users to set weak passwords or even create accounts without the necessary credentials, immensely encourage the violation of compliance requirements. Therefore, the internal organizational environment must be appropriately designed to improve the propensity for compliance, by including the relevant information system security measures to reduce the chances of unauthorized access.

            Organizations without appropriate security readiness, including security expertise and infrastructure, usually record a reduced propensity for compliance. Security expertise involves improving employee compliance, especially by focusing on certified security professionals' levels of knowledge, skill and expertise in information security management. However, Cherdantseva, Hilton, Rana and Ivins (2016) indicate that there is a need to involve the end users, as well as the information technology department, in security management. Harrell (2014) argues that although conventional risk management techniques have been efficient in addressing the security risk of a single organization, globalization, technology and business relationships still assign immense security responsibilities and roles to employees. Thus, there is a need to establish specific employee roles, as well as constantly train managers on the importance of information security.

            Addressing Cyber Security Concerns. Although there exists a wide range of risk mitigation measures, there are several mistakes made by information security managers which immensely expose organizations to cybercriminals. While cyber-security depends less on technology, there is an increased tendency toward overreliance on technical tools, which are not the basis for a holistic and robust cyber-security strategy and policy. In this regard, information security managers who trust technology too much immensely expose the organization to the risk of cyber-attacks. According to KPMG International, the organization should implement robust information security systems, but information regarding the manner in which employees affect cyber-security is essential, because the weakest link in information security is the human factor, both for IT users and professionals. In this regard, it is clear that information security management incorporates organizational functions beyond the information technology department (Cherdantseva & Hilton, 2012). Therefore, to effectively address information security concerns, firms must embrace an organization-wide framework that integrates the corporate strategy, as well as the information security strategy, while considering government regulations.

Proposed Information Security Framework

            The synergistic security model, shown in Figure 1 below, integrates corporate strategy elements, such as work designs and structures, and human resource management to ensure a holistic approach to information security management. Based on Leavitt's diamond model of organizational management, the synergistic framework focuses on people, technology, structure and tasks to ensure positive information security behavior within an organization. The human factor is the weakest link in information security; hence organizations must focus on improving the individual propensity for IA compliance to reduce the risk of cyber-attacks (Harrell, 2014). Thus, the synergistic security model, which focuses on improving individual information security compliance behavior while considering the organizational structure and information security tasks and technology, can immensely assist organizations in complying with government IA regulations.

            Apart from implementing robust information security technology, organizations must ensure that users possess the necessary skills and knowledge to improve their performance. Failure to equip employees with the necessary knowledge and skills leads to increased non-compliance, because the implemented technology becomes an immense hindrance to the performance of their day-to-day activities. In this regard, technology users will most likely violate information security controls to ensure the completion of job tasks (Harrell, 2014). Thus, an organization must conduct employee training to ensure that workers possess the necessary skills and knowledge regarding the implemented information systems and technology.

            Organizations must implement an information structure with defined tasks to enhance the smooth flow of information within the organization, hence ensuring that every stakeholder understands their compliance roles. Policies, rules and processes must be specifically defined to ensure that all employees understand the power dimensions of the organization. Moreover, the constructs of the work system designs must be harmonious to ensure that employees are assigned specific information security roles, to avoid contradictions or gaps that can increase the chances of non-compliance. Inappropriate work designs can easily lead to unassigned roles, duties assigned without the knowledge of the relevant employee, or over-assigned or under-assigned workers, hence increasing the chances of non-compliance. Therefore, the work system must be appropriately designed to ensure that all information system roles are allocated to specific people with the capacity to deliver on particular demands.

Figure 1: The synergistic security model. Source: Harrell (2014).

Conclusion

            Information system dependence is expanding at a very high rate, especially because of business competitiveness, but there still exist compelling information security concerns attributed to this growth. In this regard, information assurance is increasingly becoming an essential subject in information system management. Moreover, information compliance has become an area of increased interest for organizations, as well as for relevant industry stakeholders, but a wide range of factors, including employees, organization category, and government regulations, still influences information assurance policy compliance. Previous studies are yet to establish a consensus on the disciplines, inter-discipline relations, and the scope and goals of information security and information assurance. However, the internal business environment plays an immense role in ensuring overall organizational compliance. Specifically, individual information security compliance largely determines organizational adherence to government regulations. In this regard, through appropriate work designs, training, and communication, organizations can avoid the most common information security mistakes. For instance, appropriate training can improve the understanding of government regulations and equip employees with relevant knowledge and skills about information security technology, hence improving compliance behavior. Therefore, organizations must align the internal business environment to address external IA compliance requirements, especially by focusing on employee information security behavior.

References

Braun, C. K., Vance, A., & Alexander, E. C. (2015). Managing patient confidentiality issues in rural American hospitals: Dealing with HIPAA in Central Appalachia. Ethics & Critical Thinking Journal, 2015(2).

Cannoy, S. D., & Salam, A. F. (2010). A framework for health care information assurance policy and compliance. Communications of the ACM, 53(3), 126-131.

Cherdantseva, Y., Hilton, J., & Rana, O. (2012, September). Towards SecureBPMN: Aligning BPMN with the information assurance and security domain. In International Workshop on Business Process Modeling Notation (pp. 107-115). Springer, Berlin, Heidelberg.

Cherdantseva, Y., Hilton, J., Rana, O., & Ivins, W. (2016). A multifaceted evaluation of the reference model of information assurance & security. Computers & Security, 63, 45-66.

Harrell, M. N. (2014). Factors impacting information security noncompliance when completing job tasks.

McFadzean, E., Ezingeard, J. N., & Birchall, D. (2011). Information assurance and corporate strategy: A Delphi study of choices, challenges, and developments for the future. Information Systems Management, 28(2), 102-129.

Pascu, A. (2016). Corporate compliance in health care: An overview of effective compliance programs at three not-for-profit hospitals (Doctoral dissertation, Utica College).

Incident Response Exercise & Report

Incident Response Exercise & Report

SIFERS-GRAYSON CYBERSECURITY INCIDENT REPORT FORM

  1. Contact Information for the Incident Reporter and Handler

Name:

Role: Information Security Analyst

Organizational unit: The Blue Team

Email address: name_secanst.ict@sifersgrayson.us  

Phone number: +1 606 645 234 123

Location: 1555 Pine Knob Trail, Pine Knob, KY 42721

  2. Incident Details

Status change: The incident started on April 18, 2019 at 0800 hrs GMT-4 and ended on April 19, 2019 at 1000 hrs GMT-4.

Physical location of the incident: Grayson City, Kentucky, USA

Current status of the incident: Complete  

Source/cause of the incident: The source of the incident was a malware attack on the drone AX10 (IP address 10.10.128.82) during a trial session at the company's test range. Installed on a PROM chip, the malware transferred control of the drone to the Red Team.

Description of the incident

            The incident was detected when the drone AX10, then under trial, was flown past the test range. During the test session, the engineers lost control of the vehicle to hackers at an unknown location, who reached the drone through the company network. The hackers flew the drone to the company headquarters, which lies beyond the company's test range. A firewall log analysis revealed a malware attack.

Description of affected resources

            The malware attack affected the drone and the R&D workstations, and hence both the Test Range network (10.10.128.0/24) and the R&D Center network (10.10.120.0/24).

 Incident category

            The incident was a malware attack facilitated by employees (information system users).

Mitigating factors

            The primary mitigating factor was the system restore points that were set on R&D workstations and servers to enhance data and application recoverability. The use of Windows operating systems, which have robust restore features, enabled the recovery of data and applications on the servers and workstations. Moreover, the firewall between the R&D department and the other company departments immensely assisted in detecting the incident (Nabil, Soukainat, Lakbabi, & Ghizlane, 2017). Analysis of the firewall logs indicated the possibility of a malware attack.
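To make the log-analysis step concrete, the following is a minimal sketch of how such a review could be scripted. The comma-separated log format, the file name firewall.log, and the rule (flag traffic leaving an affected subnet for an outside host) are all illustrative assumptions; real firewall logs vary by vendor, and the actual analysis was not described in this detail.

```python
# Illustrative sketch only: flag firewall log entries that involve the
# affected subnets. The log format (timestamp,src_ip,dst_ip,action) and
# the file name are assumptions, not the format of any specific product.
import csv
import ipaddress

AFFECTED = [
    ipaddress.ip_network("10.10.128.0/24"),  # Test Range network
    ipaddress.ip_network("10.10.120.0/24"),  # R&D Center network
]

def in_affected(ip: str) -> bool:
    """Return True if the address falls inside an affected subnet."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in AFFECTED)

with open("firewall.log", newline="") as f:
    for timestamp, src, dst, action in csv.reader(f):
        # Traffic leaving an affected subnet for an outside host is suspect.
        if in_affected(src) and not in_affected(dst):
            print(f"{timestamp}: suspicious flow {src} -> {dst} ({action})")
```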

Response actions performed

            Response to the security attack was conducted through a series of actions performed immediately after the incident was detected. The first step was to unplug any PROM chips connected to workstations in the R&D department or to the drone. The PROM chips and the R&D workstations were then scanned using antivirus software. Finally, a system restore was conducted on the R&D workstations, rolling the operating systems back to a point before the attack was initiated in order to recover data, applications, and system functionality.

Other organizations contacted  

The company contacted Microsoft, the developer of the Windows operating system, to identify the correct system restore procedure. Moreover, the organization contacted Kaspersky, the manufacturer of the antivirus software installed on the company computers, to establish more details on the nature of the attack and the extent of the expected damage.

  3. Cause of the Incident

            The primary source of the security incident was network vulnerabilities, especially among the Wi-Fi network users. Company employees at the headquarters have been misusing information system access privileges by storing confidential data on USB keys and sharing sensitive information with strangers. Moreover, ineffective antivirus software allowed the malware to move from one department to another without detection.

  4. Cost of the Incident

            When compared to previous information security incidents, the cost of the recent incident was considerably lower because it did not involve a massive loss of data or business functionality. The organization was able to retrieve large proportions of data and applications from the system restore points set before the attack. However, the technical activities conducted in response to the attack were still significantly costly. Specifically, the disaster recovery efforts required 180 person-hours, hence a budget of about $18,000, without considering the aborted drone tests.
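As a sanity check on those figures, the implied loaded labor rate can be back-calculated; the $100-per-hour rate below is an inference from the two numbers in this report, not a rate stated anywhere in it:

\[
\frac{\$18{,}000}{180\ \text{person-hours}} = \$100\ \text{per person-hour}
\]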

  5. Business Impact of the Incident

            The attack had an immense impact on business functionality. The R&D department was technically down for more than twelve hours after the incident was detected, paralyzing any planned drone tests. Moreover, the company is expected to suffer a significant reduction in drone sales because of reduced customer confidence. One of the most considerable issues regarding UAS (Unmanned Aircraft Systems) is loss of control and safety of the vehicle. Thus, news about the current attack will probably turn away a high number of customers, hence significantly reducing revenue.

  6. General Comments

            Sifers-Grayson needs a robust information security strategy to significantly reduce the chances of such attacks in the future. Apart from having immense information security vulnerabilities, the company also possesses a poor incident detection mechanism. For instance, to conduct the attack, the Red Team was able to use a long attack vector that traversed almost every department in the organization. Moreover, it took a long time to detect the attack (Rudd, Rozsa, Günther & Boult, 2017). Thus, the organization must implement an organization-wide information security strategy, including robust incident detection capabilities.

            One of the information security issues that the company must address is the handling of confidential information by employees and other stakeholders. Confidential information found on a USB key collected from a lunch table enabled the Red Team to access the company network and eventually mount an attack. Moreover, employees often share essential information with strangers (Ross, Viscuso, Guissanie, Dempsey & Riddle, 2016). In this regard, the organization is highly vulnerable to attacks conducted through system user accounts (Lévesque, Chiasson, Somayaji & Fernandez, 2018). Therefore, to reduce the risk of future attacks, the company must create an effective system access plan, including essential employee training and a framework of access privileges.

            Moreover, incident detection and response are extremely poor at Sifers-Grayson; the company cannot swiftly detect and respond to information security incidents. An effective SIEM tool, such as LogRhythm's SIEM package, can significantly assist the company in detecting incidents because of its centralized approach to security information management (DFARS, 2019). Therefore, the contractor must identify an efficient SIEM tool to enhance the tracking of incidents.
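As a rough illustration of the kind of correlation logic a SIEM applies, the sketch below flags any source address that produces a burst of failed logins within a sliding time window. This is a generic, hypothetical rule written for illustration; it is not LogRhythm's actual API, and the event format, window, and threshold are all assumptions.

```python
# Illustrative SIEM-style correlation rule (hypothetical, not any
# vendor's API): alert when one source IP produces THRESHOLD or more
# failed-login events within a sliding WINDOW.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # assumed sliding window
THRESHOLD = 5                   # assumed alert threshold

def detect_bursts(events):
    """events: (timestamp, source_ip, event_type) tuples sorted by time.
    Yields (timestamp, source_ip) whenever the rule fires."""
    recent = defaultdict(deque)  # source_ip -> timestamps inside window
    for ts, ip, event_type in events:
        if event_type != "failed_login":
            continue
        window = recent[ip]
        window.append(ts)
        while window and ts - window[0] > WINDOW:
            window.popleft()     # drop events older than the window
        if len(window) >= THRESHOLD:
            yield ts, ip

# Hypothetical sample data: six failed logins, 30 seconds apart.
start = datetime(2019, 4, 18, 8, 0)
sample = [(start + timedelta(seconds=30 * i), "10.10.120.55", "failed_login")
          for i in range(6)]
for ts, ip in detect_bursts(sample):
    print(f"ALERT {ts}: burst of failed logins from {ip}")
```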

References

DFARS. (2019). 252.239-7000 Protection Against Compromising Emanations. Retrieved from https://www.acq.osd.mil/dpap/dars/dfars/html/current/252239.htm#252.239-7009

Lévesque, F. L., Chiasson, S., Somayaji, A., & Fernandez, J. M. (2018). Technological and human factors of malware attacks: A computer security clinical trial approach. ACM Transactions on Privacy and Security (TOPS), 21(4), 18.

Nabil, M., Soukainat, S., Lakbabi, A., & Ghizlane, O. (2017, May). SIEM selection criteria for an efficient contextual security. In 2017 International Symposium on Networks, Computers and Communications (ISNCC) (pp. 1-6). IEEE.

Ross, R., Viscuso, P., Guissanie, G., Dempsey, K., & Riddle, M. (2016). Protecting controlled unclassified information in nonfederal information systems and organizations (No. NIST Special Publication (SP) 800-171 Rev. 1 (Withdrawn)). National Institute of Standards and Technology.

Rudd, E. M., Rozsa, A., Günther, M., & Boult, T. E. (2017). A survey of stealth malware attacks, mitigation measures, and steps toward autonomous open world solutions. IEEE Communications Surveys & Tutorials, 19(2), 1145-1172.

Impacts of Cultural Differences on Business Organizations

Impacts of Cultural Differences on Business Organizations

            Effective communication largely determines the success of marketing efforts in culturally diverse markets. An organization that seeks to understand and adapt to the communication demands of culturally diverse markets is more likely to experience success in terms of increased profits, revenue, market share, and a valuable workforce (Bovée & Thill, 2016). For instance, Japan and America possess significantly different cultures, and hence communication styles. In this regard, for an American company to succeed in Japan, which is a high-context culture, it must acknowledge, analyze, understand, and adapt its communication approach to the Japanese, thereby developing appropriate communication links with buyers and workers from that country. Therefore, depending on its reaction to the communication challenges of culturally different markets, a company can derive positive as well as negative impacts from diversity.

            Successful intercultural communication demands that a company or an individual overcome ethnocentrism and recognize cultural variations in order to enhance sensitivity to culture and diversity. Some of the necessary aspects of developing effective intercultural communication skills include studying and respecting other languages by speaking, writing, and listening carefully, and using interpreters, translators, and translation software to enhance cultural adaptability (Bovée & Thill, 2016). For instance, Erin Meyer, the author of The Culture Map, had to learn to listen carefully and to study and adapt to the Japanese communication culture, with the help of a translator, a colleague from Japan, in order to communicate effectively with Japanese audiences (The Lavine Agency Speakers Bureau, 2014). Therefore, firms must follow this due course to create communication links with potential buyers and employees in their target culturally diverse markets.

            Communication is a necessary component of marketing, especially in the highly diverse global market. Organizations that follow the due process of developing effective intercultural communication skills thrive in varying cultures around the world and experience increased profits, revenue, and market share. Companies that fail to create an effective method of intercultural communication cannot penetrate, let alone sell in, a culturally diverse market. In this regard, such organizations encounter a wide range of challenges, including market size restrictions and reduced revenue and profits. Therefore, the impacts of cultural difference largely depend on a company's response to the communication challenges associated with the varying aspects of diverse markets.

References

Bovée, C. L., & Thill, J. V. (2016). Business communication today (14th ed.). Harlow: Pearson Education Limited.

The Lavine Agency Speakers Bureau. (2014, December 10). Business Speaker Erin Meyer: How Cultural Differences Affect Business [Video file]. Retrieved from https://www.youtube.com/watch?v=zQvqDv4vbEg

Human Origin and Evolution

Human Origin and Evolution

The origin and evolution of human beings as a distinct species is a branch of biological anthropology that attracts more interest among researchers and students of anthropology today than at any other time in history. Human evolution, as part of biological anthropology, aims to bring to light the development of the human race as a distinct species through the evaluation of fossils. Similarly, the study of human evolution aims at acquiring a clear understanding of the traits we share with our closest primate relations, as well as defining our relationships with non-primate species. Lastly, the study of human evolution is important for understanding how the process of evolution has contributed to the variation within the human species. As a result, the only way to understand the biological structure and functionality of present-day human beings is to study and closely evaluate the process of human evolution through the analysis and comparison of fossils in relation to modern populations.

The origin of life is a highly debated topic in the field of science and in related studies such as anthropology and archaeology. It is estimated that the earth and other heavenly bodies were formed about 4,600 million years ago. The earth was formed as a result of the reaction of different gases and other chemical elements and compounds present in space at the time. It is theorised that chemical processes on the early earth eventually produced simple-celled organisms, which form the foundation of the study of human evolution. As a result, it is believed that life on earth developed from these simple-celled organisms into the present complex life forms.

Scholars from different fields of science with an interest in defining human origin and evolution, such as Louis Pasteur, a French microbiologist, A. I. Oparin, a Russian biochemist, and the father of evolution, Charles Darwin, all agree on one principle: life originates only from pre-existing forms. These scholars argue that the conditions on earth 4,600 million years ago were very different from the present-day atmosphere. The primordial earth, a term used to refer to the earth as it was 4,600 million years ago, was probably in a gaseous state; through diverse chemical processes, compounds condensed to form solid matter, and simple one-celled organisms similar to today's cyanobacteria eventually emerged. Cyanobacteria are photosynthetic organisms that release oxygen as a by-product, and it is believed that these bacteria were responsible for altering the earth's atmosphere. The oxidizing effect provided the conditions for simple one-celled organisms to develop and adapt appropriately to the existing environment. Correspondingly, this forms the basis of the biological definition of evolution, which states that evolution is the gradual unfolding of new and complex forms of life from previous, less complex forms over a long period of time.

For quite a substantial period there were competing theories of evolution from a group of scientists who aimed to explain the process of evolution through mathematical reconstructions. The most notable scholars of this view include Sewall Wright of the United States and Ronald Fisher and J. B. S. Haldane of Great Britain. While their studies produced resourceful insights, their results remained obscure and mathematically complex for most evolutionary biologists. However, the recent past has witnessed the merger of these two traditions through the use of genetics to explain the drastic change of life forms throughout the history of mankind. As biological anthropologists, however, we will restrict ourselves to understanding the origin of life and the biological functionality and behavioural patterns of present organisms in relation to past life forms from an evolutionary perspective.

The best approach to understanding human evolution is to first familiarise oneself with the basic principles of biological anthropology. Biological anthropology defines evolution as the gradual biological transformation of ancestral populations, through morphological and genetic changes, into more modern populations. Many cases of evolution have been documented in nature, and a good number of them show that differential reproductive success in a population results in transformations of morphology and genetics in subsequent generations. Similarly, humans are said to share almost 99% of their DNA with chimpanzees and the other great apes, which is an indication of common ancestry between the species. Likewise, humans, apes and monkeys share a significant percentage of their DNA with bananas as well. This supports the view that life is made up of the same amino acids and that it developed from simple single cells into the more complex and diverse present populations. Therefore, human beings, just like all other creatures, developed from simple-celled organisms and adopted different forms through their differing responses to the ever-changing environment.

There are two notable figures in the study of evolution, namely Gregor Mendel and Charles Darwin. These two immensely contributed to the understanding of human evolution, the latter through his theory of natural selection and the former through his studies of genetic heredity. Mendel ran a series of experiments breeding peas with different traits over generations, laying the foundation for the modern study of genetics. Through his study, Mendel realised that there were elements in cells that persisted generation after generation of breeding, and he referred to the traits they expressed as dominant characteristics; their opposite is what is referred to as recessive characteristics. It was later discovered that all cells, including those in human populations, behave in a similar manner to peas, with both dominant and recessive characteristics. As a result, the conclusions of Mendel were adopted to solidify Darwin's arguments, becoming what is referred to as the genetic theory of natural selection.

Mendel discovered four important principles of genetic heredity, which act as a reference in the study of human evolution. First, he stated that the units of inheritance are discrete elements and remain so generation after generation without alteration, even if it may seem otherwise at a glance. In his experiments he observed that the seed colour of peas was either green or yellow, and no matter how many generations he crossbred, these discrete colours remained the same. Mendel referred to these units of inheritance as genes. Second, organisms inherit a single discrete unit, or gene, from each parent. Genes are located on chromosomes, and each chromosome pairs with a corresponding chromosome from the other parent. In peas, each chromosome carrying a colour gene pairs with a chromosome carrying the corresponding colour gene to produce either a yellow or a green seed, depending on the dominance or recessiveness of the individual genes.

Mendel also discovered that different traits are inherited independently as long as their genes are on different chromosomes. Mendel concluded this after observing the different organs of the pea plant, from the flowers to the stems: whether a plant had a long or a short stem had no effect on the colour of the seed, solely because the colour genes were located on different chromosomes. Mendel's fourth principle states that even though each organism inherits a single gene from each parent, some of the genes may be recessive and thus fail to be expressed. In the presence of both a dominant and a recessive trait, the dominant trait is expressed, though the recessive trait may be expressed in subsequent generations. In Mendel's peas, yellow seed colour is the dominant trait and green the recessive one, so a yellow-seeded plant may still carry and pass on the green allele, as the sketch following this paragraph illustrates.
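A short worked example can make the dominance rule concrete. The toy function below enumerates a Punnett square for a single-gene cross, using 'Y' for the dominant yellow allele and 'y' for the recessive green allele; it is an illustration of the Mendelian logic described above, not a reconstruction of Mendel's own data.

```python
# Toy Punnett square for a monohybrid cross of pea seed colour.
# 'Y' = yellow allele (dominant), 'y' = green allele (recessive).
from collections import Counter
from itertools import product

def cross(parent1: str, parent2: str) -> Counter:
    """Count offspring phenotypes for a single-gene cross.
    Each parent is a two-letter genotype such as 'Yy'."""
    phenotypes = Counter()
    for a, b in product(parent1, parent2):  # one allele from each parent
        genotype = "".join(sorted(a + b))   # e.g. 'Yy'
        phenotype = "yellow" if "Y" in genotype else "green"
        phenotypes[phenotype] += 1
    return phenotypes

# Crossing two heterozygous plants reproduces Mendel's 3:1 ratio.
print(cross("Yy", "Yy"))  # Counter({'yellow': 3, 'green': 1})
```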

The principles of genetics have been widely used to explain the concept of evolution in the context of human beings. Researchers have identified several dominant and recessive traits in human beings, among them the ability or inability to roll the tongue and the ability or inability to taste a chemical known as PTC. Importantly, Mendel acknowledged that the environment can affect inherited traits, mainly through exposure to different strengths of sunlight, the occurrence of disease at an early age, or inadequate nutrition over several generations. Mendel's principles of genetics formed the basis for the scientific study of genetics at the micro and molecular levels, explaining how human populations evolve over time. Science explains the development from a simple single cell to a larger multi-celled unit through a linear genetic model: atom to molecule (amino acid to protein, or nucleic acid to DNA) to gene to chromosome to cell to individual body to population. It is through the complex process of protein synthesis that organisms assume different forms, but the origin of all species is from simple single-celled organisms.

Darwin's theory of natural selection argues that individuals may produce offspring faster than resources can sustain, since resources remain relatively stable while populations grow. Darwin also observed that biological variation occurs among all species and that variation is an inherited characteristic. Consequently, Darwin concluded that when resources are too limited to sustain the present population, individuals with variations favourable for survival are likely to live longer and pass those favourable characteristics on to their offspring. The end result is a population of individuals with favourable survival traits; with continued competition, individuals that cannot keep up are naturally selected out of the population in the long run, making such groups extinct. Darwin explains this clearly through the example of the long neck of the giraffe. He argues that long ago there were both short-necked and long-necked giraffes, but under competition for food only the long-necked giraffes had the favourable survival trait and were able to pass it on to their offspring. Short-necked giraffes, facing inadequate food resources, produced fewer offspring, progressively declining to the point of extinction. Thus, Darwin's theory of natural selection provides profound knowledge for the study of human evolution and explains the different variations in modern human populations.
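The logic of the giraffe example can be expressed as a small simulation. The sketch below is a deliberately simplified toy model: the survival probabilities, the population size, and the assumption that offspring inherit the parent's trait unchanged are illustrative choices, not biological estimates.

```python
# Toy simulation of differential survival: a trait ('long' vs 'short'
# neck) with different survival rates shifts the population over
# generations. All numbers are illustrative, not biological estimates.
import random

SURVIVAL = {"long": 0.9, "short": 0.6}  # assumed survival probabilities

def next_generation(population, size=1000):
    """Apply differential survival, then breed back to a fixed size.
    Offspring inherit the parent's trait (no mutation in this model)."""
    survivors = [g for g in population if random.random() < SURVIVAL[g]]
    return [random.choice(survivors) for _ in range(size)]

population = ["long"] * 500 + ["short"] * 500
for generation in range(10):
    population = next_generation(population)
    share = population.count("long") / len(population)
    print(f"generation {generation + 1}: long-necked share = {share:.2f}")
```

Running it shows the long-necked share rising toward 1.0 within a few generations, mirroring Darwin's verbal argument.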

Mutation is another important factor in the process of evolution, and as such it requires extensive analysis and review. Mutation refers to a change in the genetic make-up of an organism, and it is only through changes in genetic material that organisms can transform into different forms. Mutation takes place when the genetic material is undergoing replication. The majority of mutations that become part of the evolutionary process occur at the base level of the DNA, with chain effects on the behavioural patterns and biological structure of the organism. Mutations are caused both by human-related factors, such as chemical elements in food substances or human-induced radiation, and by natural causes, such as cosmic radiation and the inherent instability of DNA. Equally, mutation is among the fundamental factors that influence evolution, not only in human populations but in all other organisms, since it is evident that all organisms evolved from simple single-celled organisms.
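As with selection, the replication-error mechanism can be sketched in a few lines. The code below copies a DNA string base by base with a small per-base substitution probability; the sequence and the (grossly exaggerated) mutation rate are illustrative assumptions.

```python
# Toy model of point mutation during DNA replication: each base is
# copied faithfully except with a small per-base error probability.
# The mutation rate here is exaggerated so differences are visible.
import random

BASES = "ACGT"

def replicate(sequence: str, mutation_rate: float = 0.01) -> str:
    """Copy a DNA sequence, substituting a random different base
    at each position with probability mutation_rate."""
    copy = []
    for base in sequence:
        if random.random() < mutation_rate:
            base = random.choice([b for b in BASES if b != base])
        copy.append(base)
    return "".join(copy)

original = "ATGGCCTTAGCGTACGATCC"
mutant = replicate(original, mutation_rate=0.1)
print(original)
print(mutant)  # a few positions will typically differ
```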

Mutations and natural selection are responsible for the vast majority of macroevolutionary events: mutations introduce new genetic material and thereby produce the variability that natural selection works on. In the shorter run, however, genetic drift or gene flow may be the major cause of change in some lineages.

How Fake News Affect Students

How Fake News Affect Students

In the current information era, fake news is being spread at an alarming rate, especially on digital social media platforms such as Facebook and Twitter, leading to high levels of disinformation in society. According to a study conducted by data scientists on Twitter data from 2006 to 2010, accurate information spreads ten times more slowly than hoaxes and rumors. Moreover, a significant percentage of the general public, including students, faces immense difficulty distinguishing fabricated from accurate news (Donald 1). In this regard, the problem of truth decay in the marketplace of ideas has been extremely amplified by the internet and online social networks. The law is also not watertight; some perpetrators of fake news easily get away with the offense, while the relevant entities are unwilling or unable to do anything beyond feeble solutions such as fact-checking (Seetharaman 2). In this regard, internet and social network users, including students, are exposed to huge volumes of fabricated information that they cannot easily recognize, and hence cannot reliably perform important activities, such as online research. Therefore, there is an urgent need to craft and implement new legislation outlining clear strategies to improve digital literacy, force digital platforms to adopt working solutions, and ensure that justice is administered to the criminals who spread fake news.

Truth decay in the marketplace of ideas was a social issue long before the inception of the internet and digital social networks, but these platforms, in conjunction with modern technology, have immensely escalated the problem. The tendency of people to credit the knowledge of others, because of the understanding that no one knows everything, leads to an information cascade dynamic, especially when such information is passed from one person to another. In this regard, considering also the natural human tendency to propagate negative and novel information, the internet and online digital platforms, by tremendously increasing the rate at which information spreads, have immensely amplified the issue of truth decay in society. In addition, modern technologies such as machine learning have greatly enhanced the creation of realistic and hard-to-detect fake content, such as 'deepfake' videos, significantly reducing the ability to distinguish between fabricated and authentic news. Previous studies confirm that even students, who might be considered fluent in social media, cannot effectively judge the credibility of online information. In this regard, young learners are likely to fall prey to the massive volumes of false information on the internet. For instance, without the ability to differentiate between fake and credible sources of information, students cannot conduct meaningful online research, which is highly essential, especially to college students. Like the rest of society, students can also be instigated into violence through conspiracy theories and rumors peddled on social media platforms. Regardless of the negative impacts of fake news on students and education, the responsible people and entities either do not understand the issue or are unwilling to implement working strategies to mitigate the threat (Rosoff 1). Moreover, under the US constitution, falsity alone cannot be used to criminalize fake news unless a motive to induce legally acknowledged harm can be identified. Thus, the ministries of propaganda, evil demons, and digital overlords have largely excelled in the spread of fabricated news, leading to high levels of disinformation, especially among students.

A combination of precise regulations and increased awareness is one of the most efficient solutions to the problem of fake news on social media, and hence to the escalating truth decay that immensely reduces the reliability of the internet and online social media. Stringent legal measures need to be implemented to ensure that online platforms adopt proactive solutions to prevent the spread of fabricated information. The law should specifically require online social media platforms to take down accounts attributed to the spread of false news where they can identify them. In addition, new legislation is required to ensure that perpetrators of criminal activities on the internet, especially rumor mongers, do not go unpunished. However, digital overlords and ministries of propaganda are usually armed with sophisticated technology and expertise to avoid being unmasked, so some of them will always succeed in their endeavors. In this regard, legal institutions for coordinating nationwide awareness campaigns, as well as for educating the public about distinguishing fake from legitimate news, are required (Donald 3). For instance, the government can include digital literacy subjects in the school curriculum to ensure that students, who are easily duped by fake news, have substantial knowledge for identifying credible sources of information. In that case, students would be able to distinguish fake from legitimate news, mitigating the potential harm to education, as well as to peaceful coexistence in educational institutions.

Fact-checking and labeling of fake news is perceived, especially by the owners of social media platforms, as an alternative strategy to prevent the spread of illegitimate news. Most online social networks cite freedom of expression as the primary reason they fail to implement stringent measures against the criminals spreading fabricated information, despite having the ability to identify a significant number of them. According to Mark Zuckerberg, Facebook founder and CEO, social media platforms are not the arbiters of truth, and hence cannot prevent the spread of fake news by taking down accounts attributed to the undesirable act (Seetharaman 2). Zuckerberg indicates that the most social media platforms such as Facebook can do is fact-check and label unreliable sources as potential grounds of fake news, letting members of society, such as students, decide how to treat such information. However, with the current low levels of digital literacy and the natural human appetite for negative and novel information, such a feeble strategy does not seem capable of addressing the humongous problem of fake news. Thus, the art of spreading fabricated news is expected to continue thriving on social media platforms and duping students, because users cannot easily recognize such information, while the platforms remain unwilling to implement practical solutions to the problem.

The current rampant spread of fake news on the internet and social media platforms is a real threat to the social fabric, especially because it amplifies the problem of truth decay in the marketplace of ideas. Because of the increased volume of fake information on social media, as well as the difficulty of recognizing such news and its sources, students are expected to face immense challenges, especially in conducting credible research. On the other hand, social media platforms are unable or unwilling to implement practical solutions to address the issue. In addition, current legislation possesses loopholes that allow the spread of fake news on the internet and digital social networks. For instance, criminals can spread propaganda and misleading information under the guise of freedom of expression, provided an ulterior motive to cause legally acknowledged harm cannot be identified. The general ability to distinguish fake from legitimate sources of information is also too low, especially among students. Therefore, to adequately address the problem of fake news and the educational challenges it escalates, stringent legal measures are required to compel digital social media platforms to implement working solutions, to punish perpetrators, and to incorporate awareness content on the subject into the school curriculum.

Works Cited

Donald, Brooke. “Stanford Researchers Find Students Have Trouble Judging the Credibility of Information Online.” Stanford Graduate School of Education, Stanford, 15 Dec. 2016, ed.stanford.edu/news/stanford-researchers-find-students-have-trouble-judging-credibility-information-online.

Rosoff, Matt. “Mark Zuckerberg Still Doesn’t Understand the Problem with Fake News.” CNBC, 20 June 2018, www.cnbc.com/2018/07/18/mark-zuckerberg-and-the-infowars-contradiction.html.

Seetharaman, Deepa. “Facebook Drowns Out Fake News With More Information.” WSJ, The Wall Street Journal, 3 Aug. 2017, www.wsj.com/articles/facebook-drowns-out-fake-news-with-more-information-1501754403.

How did the major U.S. wireless service providers become an oligopoly?

How did the major U.S. wireless service providers become an oligopoly?

Introduction

An oligopoly is a market structure in which a small number of large firms dominate the supply of a product or service. Each oligopolistic corporation therefore faces competition from only a few rivals selling similar products or services.

Answer and Explanation

Initially, the FCC licensed two cellular carriers per market: an A-block (non-wireline) carrier and a B-block (wireline) carrier. After a while, many of the independent A-block carriers, which were not linked to the local telephone companies holding the B-block licenses, were consolidated.

The wireless firms started as oligopolies because the fixed cost of building a mobile phone network was very high at the time. In the mid-1980s, a cell phone cost about 100 USD, which is about 300 USD in today's dollars. AT&T was divided into RBOCs (Regional Bell Operating Companies) with wireless allocations that, at the time, had near-zero value. With time, mergers consolidated several components of the old Bell System.

A drop in prices and an increase in cell phone popularity drove rates down further, yielding a virtuous cycle by allowing capital costs to be spread among a larger number of users, as the illustrative arithmetic below shows. New technologies and new cellular bands were later licensed to serve more customers and provide more competition. Currently, only four major carriers serve millions of customers in the United States.
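The virtuous cycle follows directly from average-cost arithmetic: with a large fixed network cost F and a small marginal cost c per subscriber, the average cost per subscriber falls as the subscriber count n grows. The numbers below are purely illustrative, not actual carrier costs:

\[
AC(n) = \frac{F}{n} + c; \qquad \text{with } F = \$1\text{ billion:}\quad AC(100{,}000) = \$10{,}000 + c, \qquad AC(10{,}000{,}000) = \$100 + c.
\]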

GoldieBlox Case Study

GoldieBlox Case Study

  1. Of the factors that influence consumer behavior, which category or categories (cultural, social, personal, or psychological) best explain the existence of a blue toy aisle and a pink toy aisle? Why?

            From the GoldieBlox case, it is clear that consumer behavior is influenced by social, cultural, and psychological factors. Before Debbie Sterling developed GoldieBlox, people bought toys within the cultural setting of the blue toy aisle for boys and the pink toy aisle for girls. Although GoldieBlox is unique, it is a modified form of the pink toy aisle, and thus adheres to the cultural order. Sterling had to maintain the pink color to meet the cultural as well as social expectations for customers to embrace her product. However, her modifications transcend those social and cultural expectations by incorporating features, such as the erector set and the storybook parts, that were conventionally absent from pink-aisle toys. The erector part indicates that consumer behavior is also influenced by psychological factors, because culturally such features were expected in the blue toy aisle for boys. Thus, it is clear that consumer behavior is influenced by psychological, cultural, and social factors.

  2. Choose the specific factor (for example, culture, family, occupation, attitudes) that you think most accounts for the blue/pink toy aisle phenomenon. Explain the challenges faced by GoldieBlox in attempting to market toys that “swim against the stream” or push back against the forces of that factor.

Culture mostly accounts for the blue/pink aisles because they are created and maintained to meet the conventional social expectations for the male and female genders. Culturally, blue is associated with boys and pink with girls. Even Sterling had to preserve the colors to meet at least some of the cultural expectations. Because she included some new features in GoldieBlox, thus moving slightly out of the cultural order, Sterling was, surprisingly, accused by feminists of window-dressing the issue of gender stereotyping. Therefore, it is culture that mostly accounts for the preservation of the social division of gender in the blue/pink aisles.

  3. To what degree is GoldieBlox bucking the blue/pink toy aisle system?

GoldieBlox is only mildly opposing the blue/pink aisle system, because while it preserves some of the expected features, it introduces new ones that run against the blue/pink division of gender. Some of the new features included in GoldieBlox, such as the storybook, are still in line with the social setting for the girl child. Moreover, the parade-float set involves the creation of a float to ferry the winner of a beauty pageant, which is mostly associated with the female gender, much like the pink color. However, incorporating activities such as building a float, traditionally associated with the male gender, is totally against the blue/pink norm. Sterling includes such tasks and aspects with the motive of inspiring girls to venture into careers traditionally reserved for boys, not of overhauling the gender division. Thus, GoldieBlox is not totally against the social division of gender in the blue/pink aisles; it is just against those elements of the division that affect the girl child's future career life.

  4. If GoldieBlox succeeds at selling lots of its toys, will that accomplish the mission of increasing the presence of females in the field of engineering?

            If GoldieBlox performs well in the toy market, it will very likely increase the presence of females in the realm of engineering, because the existing social division is largely psychological. Girls will start interacting with the physical world from an early age, and will develop a passion for creating structures by engaging in the new tasks introduced by Sterling in GoldieBlox. By preserving some feminine aspects of the pink aisle, such as color, and including new ones that girls like, such as the storybook, there is a high chance that girls will love GoldieBlox. In this regard, girls will be able to engage in and explore the new tasks and aspects created by Sterling with the intention of luring them into a part of the world that was previously reserved for boys. If girls start engaging in such activities at an early age, they will likely develop an affinity for them and pursue them in adulthood. Therefore, GoldieBlox will most likely increase the number of females in the engineering field if it succeeds in selling lots of its toys, because, just as happens for boys, it starts preparing girls for such careers at a very tender age by giving them the appropriate mindset and environment.