Big data analytics

Big data analytics typically involves high-performance analytics technologies, data mining, predictive models, machine learning, natural language processing, and statistical algorithms to process big data – data sets characterized by high levels of volume, variety, veracity, and velocity. Leveraging big data helps extract meaningful information essential to making faster and better decisions (Dash, Shakyawar, Sharma, & Kaushik, 2019). Healthcare big data analytics plays a major role in the delivery of personalized and value-based care, clinical risk management, care variability mitigation, and standardized internal and external reporting (Huang, Mulyasasmita, & Rajagopal, 2016; Peddoju, Kavitha, & Sharma, 2019).

With growing adoption of eHealth, mHealth, telehealth, health monitoring wearables, patient portals, and other healthcare information systems, the volume, variety, veracity, and velocity of consumer, patient, and clinical data will constantly increase. Appropriate healthcare big data analytics is needed to extract actionable insights and make strategic decisions based on such data (Peddoju et al., 2019).

In the developing world, pediatric pneumonia is identified as one of the major causes of death among children (Peddoju et al., 2019). To avoid life-threatening pneumonia-induced complications, it is important to identify children at risk of treatment failure in a timely manner. If pneumonia symptoms are detected, prompt medical attention should be sought to ensure that an accurate and timely diagnosis is made and proper treatment given. Peddoju et al. (2019) highlight the need to analyze the history of patients’ records, doctors’ expert opinions and previous treatment of similar symptoms, and cloud-based diagnostic reports (such as X-rays and blood tests) to support quick and accurate decisions regarding pediatric pneumonia interventions.

Role of Network and/or Telecommunications Technologies

Peddoju et al. (2019) recommend the use of Mobile Cloud and Big Data analytics to support the collection, storage, and analysis of large and complex healthcare data towards generating actionable insights. In the context of treating pediatric pneumonia, cloud computing is identified as one of the major supporting technologies. Generally, cloud computing refers to an interconnected network of diverse types of devices that deliver three main categories of services, namely software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS). Users acquire cloud-based services on demand (Peddoju et al., 2019). The approach proposed by Peddoju et al. (2019) assumes that all stakeholders (especially doctors, staff, and patients) are able to share information via the cloud. Furthermore, the cloud is used to interconnect diverse doctors’ information, especially diagnosis and treatment reports.

In the studied case, the cloud is primarily used as the basis for storing childhood pneumonia patients’ data. The patient-related information captured in each hospital is stored in the cloud. Then, the entire collection of pneumonia data is integrated via the cloud. The data stored in the cloud can be readily shared and accessed by all stakeholders across the world at any time. Importantly, the pneumonia treatment data is stored in the cloud so that any doctor can use it to make quick and precise decisions regarding the pediatric pneumonia cases at hand. Doctors are empowered to choose the best possible courses of action from the readily available data. Furthermore, the cloud-based data storage approach allows doctors to readily share their views and opinions regarding pediatric pneumonia symptoms and treatment interventions. To accomplish this, doctors are allowed to update the cloud-based repository with any relevant information. Therefore, the use of a cloud-based information repository could contribute to knowledge creation and sharing.

The primary motives for adopting cloud-based treatment are quick service provisioning, on-demand usage patterns, and significant cost savings. Cloud computing contributes to the implementation of an integrated IT infrastructure that enhances the overall data sharing potential between hospitals, laboratories, and government agencies, among other key healthcare stakeholders. Moreover, the cloud implies an opportunity to amass data generated from eHealth and mHealth technologies. Integrating healthcare applications, services, and data into the cloud could support remote patient monitoring (Peddoju et al., 2019). Consequently, the overall need for hospital admission is significantly reduced.

Critical Success and/or Failure Factors

Peddoju et al. (2019) opine that learning from the past could create opportunities for better results in the future. Using Mobile Cloud and Big Data analytics, it is possible to proactively identify the pneumonia symptoms among children so that timely and precise treatment can be given. Consequently, the proactive intervention against pneumonia could reduce the incidence of life-threatening pneumonia cases. In addition, unnecessary hospital admissions and re-admissions can be avoided. In turn, significant healthcare cost savings could be realized.

Maintaining different forms of health-related data, including structured, semi-structured, and unstructured, implies an opportunity to leverage big data and gain greater insights into healthcare issues of interest. Notably, close to 80% of health data is inherently unstructured, yet it could contain important cues related to healthcare needs and potential interventions. A study conducted by the McKinsey Global Institute showed that creative and effective use of big data in U.S. healthcare could generate value in excess of $300 billion every year. Cost reduction is the primary form of value created by healthcare big data analytics (Belle et al., 2015). To develop a comprehensive perspective of disease states, an aggregated strategy should be used whereby structured and unstructured clinical and non-clinical data are analyzed. Without strong aggregation of all available data sets, it would be difficult to consider every potential marker for medical assessment (Belle et al., 2015). Therefore, all forms of health data should be leveraged to increase the quality of generated insights and associated healthcare delivery decisions.

Digitization of patient data by healthcare organizations plays a major role in enhancing the ease and accuracy of data analysis. In addition, digitization makes it easy to collect, store, and distribute data across diverse digital platforms (Peddoju et al., 2019). However, to derive optimal value from the vast digital data, a long-term storage solution should be implemented. In addition, fast and accurate statistical algorithms are needed to facilitate decision-assisting automation. For example, medical images constitute a major source of data often used to support diagnosis, assessment, and planning tasks. Magnetic Resonance Imaging (MRI), X-ray, computed tomography (CT), ultrasound, fluoroscopy, and photo-acoustic imaging are some of the common clinical imaging techniques. Nevertheless, uncompressed images often require large storage and processing capacities (Belle et al., 2015; Peddoju et al., 2019).
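To make the storage burden of uncompressed imaging concrete, consider a back-of-the-envelope estimate. The image dimensions below are illustrative assumptions for this sketch, not figures taken from the cited studies:

```python
# Rough storage estimate for an uncompressed CT study.
# Assumed (illustrative) parameters: 512x512 pixels per slice,
# 16 bits (2 bytes) per pixel, 400 slices per study.
width, height = 512, 512
bytes_per_pixel = 2          # 16-bit grayscale
slices = 400

bytes_per_slice = width * height * bytes_per_pixel
bytes_per_study = bytes_per_slice * slices

print(f"Per slice: {bytes_per_slice / 2**20:.1f} MiB")   # 0.5 MiB
print(f"Per study: {bytes_per_study / 2**20:.1f} MiB")   # 200.0 MiB
```

Under these assumptions a single study approaches 200 MiB, which illustrates why long-term, scalable cloud storage is emphasized for imaging-heavy big data initiatives.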

Cloud computing is essential to ensuring “anytime, anywhere” data availability. Consequently, cloud solutions support enhanced accessibility and sharing of health data. The cloud ensures information confidentiality and availability through mechanisms such as multi-factor authentication, data encryption, data tokenization, and strict SLAs (Bamiah, Brohi, Chuprat, & Brohi, 2012; Peddoju et al., 2019). The importance of cloud computing in healthcare is underscored by the need to centrally store patients’ data in the cloud, from where it can be readily accessed. Centralized storage implies that healthcare providers have a single “source of truth” as the basis for becoming more data-driven and patient-centric. Centralized data storage and management minimizes the incidence of errors. Furthermore, cloud computing provides on-demand, elastic, and cost-efficient data services.

Cloud computing and service-oriented architecture (SOA) may be used to efficiently provide scalable, interoperable, and integrated healthcare IT infrastructures. With cloud computing and SOA, it is possible to deliver integrated telemedicine, medical imaging, and electronic medical records (EMR) applications and services. In addition, an integrated IT infrastructure supports the activities of all the key stakeholders involved in healthcare services. For example, an integrated cloud-based application creates an opportunity for enhanced collaboration between patients, healthcare practitioners, researchers, insurers, and public health officials (Belle et al., 2015; Peddoju et al., 2019). Consequently, the incidence of data inconsistencies and medical errors is considerably reduced.

The distributed nature of cloud environments necessitates robust information security and data protection measures. Cloud-based services face significant data breach threats and risks. The issue is especially complicated when personal data need to be stored or shared. Even worse, working with health-related big data could make it difficult to detect incidents of stealth data theft (Bamiah et al., 2012). In addition, the success of cloud-based pediatric pneumonia treatment depends significantly on how well sensitive data is protected against potential breaches of confidentiality and integrity. As an example of a data protection technique, Belle et al. (2015) cite iDASH as a viable approach to integrated data analysis and sharing that anonymizes biomedical data for privacy preservation purposes. The Health-e-Child consortium is another effort aimed at supporting secure integrated data analysis and sharing (Belle et al., 2015).

The proposed cloud-based data storage approach to pediatric pneumonia monitoring and treatment requires doctors to share their views and update the centralized repository accordingly. While this could contribute to knowledge creation in the healthcare domain, it introduces the risk of unreliable information (Peddoju et al., 2019). While vast cloud-based medical repositories could help identify potential treatment procedures, it could be difficult to identify false information contained in them. Therefore, trust is a fundamental cloud computing requirement, which makes it important to devise strategies for evaluating the trustworthiness of doctors’ opinions. A rating functionality should be implemented to allow doctors to appraise the pneumonia intervention suggestions made by colleagues. This could help overcome the challenges caused by erroneous data. Furthermore, the credibility of doctors making recommendations may be difficult to verify. To overcome credibility concerns, doctors’ professional qualifications and contact details ought to be provided to facilitate clarification. Another challenge takes the form of inadequate technical and human resources to support healthcare big data initiatives (Peddoju et al., 2019).
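One minimal way to realize such a rating functionality is to average peer ratings per suggestion and flag low-rated entries for review. The sketch below is an illustrative assumption about how the feature might work (the function names, scale, and example data are invented), not the mechanism specified by Peddoju et al. (2019):

```python
from collections import defaultdict

# suggestion_id -> list of peer ratings on a 1-5 scale (illustrative data)
ratings = defaultdict(list)

def rate(suggestion_id, score):
    """Record a colleague's rating of a treatment suggestion."""
    if not 1 <= score <= 5:
        raise ValueError("score must be between 1 and 5")
    ratings[suggestion_id].append(score)

def trust_score(suggestion_id):
    """Average rating; None if the suggestion has not been rated yet."""
    scores = ratings[suggestion_id]
    return sum(scores) / len(scores) if scores else None

def needs_review(suggestion_id, threshold=3.0):
    """Flag suggestions whose average rating falls below the threshold."""
    score = trust_score(suggestion_id)
    return score is not None and score < threshold

rate("amoxicillin-5d", 5)
rate("amoxicillin-5d", 4)
rate("unverified-remedy", 1)
print(trust_score("amoxicillin-5d"))      # 4.5
print(needs_review("unverified-remedy"))  # True
```

A production system would additionally tie ratings to verified clinician identities, which is how the credibility concern raised above could be addressed.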

Network and/or Application Strategy Utilized in the Case

Figure 1 illustrates the healthcare system proposed by Peddoju et al. (2019). The proposed healthcare model comprises the following layered components (Peddoju et al., 2019):

  • The cloud computing platform: To store and integrate childhood pneumonia patients’ data obtained from different sources. In addition, the cloud provides a scalable and on-demand infrastructure to support the ever-increasing big data digitization and processing needs.
  • Childhood pneumonia patient information: Typically includes digitized genomic, clinical, patient behavior, healthcare publication, and administrative and business data that make up healthcare big data. Predicting childhood pneumonia cases requires an aggregated strategy whereby structured, semi-structured, and unstructured data stemming from diverse clinical and non-clinical modalities are used to indicate the need for clinical intervention.
  • Big data integration and analytics tools and algorithms: Required for ingestion, integration and clustering, and distributed processing of structured and unstructured childhood pneumonia patients’ data across multiple cloud-based nodes. Hadoop, which uses MapReduce, is an example of a tool that supports rapid analysis and transformation of large data sets. Other analytics tools employed in the proposed system include Cassandra, MongoDB, Informix, Ingres, LucidDB, Teradata, and Interbase.
  • Data discovery and visualization: Visual design tools for processing patient data. Importantly, doctors need to readily update the cloud-based database with relevant information. The system may implement the Predictive Model Markup Language (PMML) to represent the predictive models behind big data analytics results and reports.
  • Predictive analysis: Powerful algorithms required for fast generation of decision-supporting information. Applicable algorithms may include classification, regression, association, and clustering.
  • Enterprise and ad-hoc monitoring and reporting tools: Stakeholders (doctors, staff, patients, and parents and/or guardians) will publish information via the cloud, and access reports generated by big data analytics via the system’s presentation layer.
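The MapReduce model named above can be illustrated with a toy, single-process sketch that counts confirmed pneumonia cases per hospital. The field names and records are invented for illustration; a real Hadoop deployment would distribute the map, shuffle, and reduce phases across cluster nodes:

```python
from itertools import groupby
from operator import itemgetter

# Toy patient records (invented for illustration).
records = [
    {"hospital": "H1", "diagnosis": "pneumonia"},
    {"hospital": "H2", "diagnosis": "bronchitis"},
    {"hospital": "H1", "diagnosis": "pneumonia"},
    {"hospital": "H2", "diagnosis": "pneumonia"},
]

# Map phase: emit (hospital, 1) for each confirmed pneumonia case.
mapped = [(r["hospital"], 1) for r in records if r["diagnosis"] == "pneumonia"]

# Shuffle phase: group intermediate pairs by key (hospital).
mapped.sort(key=itemgetter(0))

# Reduce phase: sum the counts for each hospital.
counts = {
    hospital: sum(count for _, count in group)
    for hospital, group in groupby(mapped, key=itemgetter(0))
}
print(counts)  # {'H1': 2, 'H2': 1}
```

The same map/shuffle/reduce structure scales to the aggregated clinical and non-clinical data sets described in the model, which is why Hadoop-style tools suit this layer.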

Figure 1:  Application strategy used in the case (Peddoju et al., 2019)


The proposed approach to childhood pneumonia monitoring relies on the history of patients’ records, doctors’ experience and previous treatment of similar symptoms, and diagnostic reports (such as imaging data and blood tests) from the cloud, in addition to big data analytics tools and techniques, to enable doctors to make quick and precise pediatric pneumonia treatment decisions. Primarily, cloud computing allows for “anytime, anywhere” data availability and accessibility. In addition, the cloud provides a centralized health-related data repository that can be readily updated by authorized doctors. In Peddoju et al.’s (2019) model, big data analytics tools and techniques are implemented in the cloud. These include aspects of data preparation, modeling, and distributed processing as well as predictive analysis, data discovery, and visualization.

A number of improvements can be made to the pediatric pneumonia monitoring and treatment approach proposed by Peddoju et al. (2019). To start with, IoT technology ought to be integrated with the proposed healthcare model to optimize the use of real-time data streams. Here, IoT-based devices would automatically monitor patients’ health conditions and send the information to the cloud-based data integration and analytics platform. Then, a pneumonia monitoring system would integrate the patient-related data via the cloud. An analysis of the continuously generated patient data could help gain crucial insights into patients’ health statuses and provide relevant diagnostic and treatment information to concerned patients and parents or guardians in a timely manner. With automated IoT-based pneumonia monitoring capability, it would be possible to provide real-time alerts and more personalized medical care. Figure 2 shows the modified diagram that illustrates the improved solution.

Figure 2: Application strategy with the proposed improvement
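The IoT extension described above could be sketched as a simple threshold check applied to each incoming vital-sign reading before an alert is pushed through the cloud platform. The thresholds, field names, and reading are illustrative assumptions only and are not clinical guidance:

```python
# Illustrative alert rule for streamed vital signs (not clinical guidance).
# A real deployment would receive readings over MQTT/HTTP and push alerts
# through the cloud platform's notification service.
THRESHOLDS = {
    "respiratory_rate": 50,   # breaths/min (assumed upper limit)
    "temperature_c": 38.5,    # degrees Celsius (assumed upper limit)
}

def check_reading(reading):
    """Return the names of vital signs that exceed their thresholds."""
    return [
        name for name, limit in THRESHOLDS.items()
        if reading.get(name, 0) > limit
    ]

reading = {"patient_id": "P-001", "respiratory_rate": 56, "temperature_c": 37.9}
alerts = check_reading(reading)
if alerts:
    print(f"ALERT for {reading['patient_id']}: {', '.join(alerts)}")
# prints: ALERT for P-001: respiratory_rate
```

In practice the threshold table would be set per age group by clinicians, and flagged readings would feed the predictive analysis layer rather than triggering alerts directly.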


References

Bamiah, M., Brohi, S., Chuprat, S., & Brohi, M. N. (2012, December). Cloud implementation security challenges. In 2012 International Conference on Cloud Computing Technologies, Applications and Management (ICCCTAM) (pp. 174-178). IEEE.

Belle, A., Thiagarajan, R., Soroushmehr, S. M., Navidi, F., Beard, D. A., & Najarian, K. (2015). Big data analytics in healthcare. BioMed Research International.

Dash, S., Shakyawar, S. K., Sharma, M., & Kaushik, S. (2019). Big data in healthcare: Management, analysis and future prospects. Journal of Big Data, 6(1), 54-73.

Huang, B. E., Mulyasasmita, W., & Rajagopal, G. (2016). The path from big data to precision medicine. Expert Review of Precision Medicine and Drug Development, 1(2), 129-143.

Peddoju, S. K., Kavitha, K., & Sharma, S. C. (2019). Big data analytics for childhood pneumonia monitoring. In Web Services: Concepts, Methodologies, Tools, and Applications (pp. 1129-1145). IGI Global.

Executive Summary

This report presents a comprehensive analysis of the Surry Hills Library and Community Centre in terms of the building’s structure and construction. The Surry Hills Library and Community Centre is located on a constrained site bound by Crown Street (the major ‘street of Surry Hills’) to the east and ‘two residential streets’ to the west (Norton Street) and south (Collins Street). It sits adjacent to several residential apartments, shops, commercial and industrial premises, and terrace housing. Though the buildings differ in scale, they feature a predominant Victorian architectural style.

A number of technologies are employed in different elements of the building, including the following: a screen cladding system, inactive and active/operable timber louvers, windows with automated blinds, integrated sustainability tools (such as the green roof, labyrinth, PV panels, and the atrium), active windows, BMS, CABS, and a transitional foyer space. Precast concrete, timber and alpolic metal composite panels, timber veneers and acroma coatings, and polyurethane are the most used construction materials. Sustainability and transparency are the key innovative initiatives employed. The Surry Hills Library and Community Centre can be regarded as an excellently considered building in terms of structure and construction.

The center has established itself as a sustainable and integrated building with all the functional and aesthetic principles geared towards the fundamental goal of performance – naturally ‘grown’ and tempered air, outstanding interior environmental quality, and increased energy savings driven by the atrium section among other systems.

1.0 Introduction

Sydney, Australia is home to several memorable urban places overwhelmingly comprised of sporting and recreational landscaped spaces. Generally, the spaces are waterfront parklands, urban commons, and community facilities intended for the rapidly growing population of the City. Recent public works in Sydney deliver architectural components that share a unique admiration – comprising ancillary structures and remnant architectural or archaeological elements submerged under the landscape. The characteristic urban architecture, where the neighboring landscape setting is amplified to suppress the expression of public and civic function, has become increasingly common across Sydney’s recent public buildings (Hardning 2010).

Designed by Francis-Jones Morehen Thorp (FJMT), Sydney’s Surry Hills Library and Community Centre assumes an atypical concept in the contemporary collection of works because it reasserts the City’s parallel practice of open expression of public architecture. The building exploits a wide array of architectural sensations to form a compelling urban icon. This way, the Surry Hills Library and Community Centre represents exceptional public architecture that supports the following major functions: library, community rooms and kitchens, classes (such as language, computer, and cooking), childcare, café, and reading room (Hardning 2010). These functions have seen the building drive unparalleled social and ethical diversity.

This report provides a detailed analysis of Surry Hills Library and Community Centre in terms of the building’s structure, construction, and detailing. The analysis entails aspects of the building’s siting, adopted structural and construction technologies, construction materials and innovative technologies used, and a critical evaluation of the building regarding structure and construction, environmental strategy, economy, occupant amenity, and construction best practices.

2.0 The building’s siting

The Surry Hills Library and Community Centre is located at 405 Crown Street, Surry Hills, Sydney, as a replacement for an ordinary community facility completed in 1956. Since the 1950s, Surry Hills has seen various business sectors and communities flow into the area. Therefore, the site required a finely considered plan to ensure that the new library and community center would be iconic while simultaneously meeting the needs of the client and target occupants (City of Sydney 2016). The building compactly fills its constrained 25-by-28-meter site and amplifies its scale by exploiting all possible architectural elements to form a marvelous public facility. It sits opposite the Shannon Reserve, where weekly markets are held, and the Clock Hotel, terminating a well-established ‘street-wall condition’ (Hardning 2010). The site is bound by Crown Street (the major ‘street of Surry Hills’) to the east and ‘two residential streets’ to the west (Norton Street) and south (Collins Street). Therefore, three edges bind the building in the form of roads (as shown in Figure 1). Surry Hills is an inner-city Sydney suburb with a community characterized by income, age, and cultural diversity. The architectural context also exhibits massive diversity – residential apartments, shops, commercial and industrial premises, and terrace housing that differ in scale but feature a predominant Victorian architectural style (Gollings & Chung 2010).

Figure 1: The center’s site plan (Brady 2014)

As a ‘global’ city, Sydney has an urban architecture facing pervasive fear, and the City Council strongly senses this. The City has dense inner-city areas that the council refers to as ‘urban villages’ to pacify a vocal community that considers buildings with more than two floors high-density. Some critics question the appropriateness of the community center’s scale in relation to the local context, suggesting that the City Council has permitted a development on a site where it would have denied any private developer a similar largesse. Crown Street constitutes one of the crucial urban streets in Sydney, and its structure could readily host this type of building. In fact, the center has improved the overall image and meaning of neighboring facilities such as Shannon Reserve. Therefore, fear of certain urban architecture may make it difficult to deliver exceptional public architecture such as the Surry Hills Library and Community Centre (Hardning 2010).

3.0 Structural and construction technologies adopted in different elements of the building

3.1 Wall system and cladding

Externally, the building is clad mainly with active or operable timber louvers (on the eastern façade) in addition to a combination of active and inactive panels (on the southern façade). The southern glass façade is the fundamental structural feature of the building and serves as the environmental atrium. The western façade is largely enclosed and rests reservedly within the small Norton Street scale (Brady 2014). Therefore, the exterior wall (cladding) of the building is made up of lightweight timber and glass panels that keep out or deflect rainwater. Moreover, there is a cavity that eliminates potential moisture. Ultimately, the façade or rain screen cladding system provides ventilation and insulation to minimize direct solar gain in the building (Blundell 2009). Figures 2–5 show the four major façades.

Figure 2: Northern façade (Brady 2014)

Figure 3: Eastern façade (Brady 2014)

Figure 4: Southern façade (Brady 2014)

Figure 5: Western façade (Brady 2014)

3.1.1 Louvers

The vertically-placed ‘timber veneer clad louver’ system is designed to enclose the three sides of the building. It stands out as an architectural landmark and also improves aesthetics (Brady 2014).

The design addresses each façade uniquely, with a varying number of active louvers, windows, and panels. The ‘north’ elevation is wholly flat and rises above the existing adjacent buildings. The ‘east’ façade has the majority of perforations and is the most active, reflecting the activity of Crown Street. The ‘west’ façade combines the features of the ‘north’ and ‘east’ façades – flat with only an insignificant number of window openings and active components exposing the circulation shaft (Brady 2014).

The windows implement automated blinds and highly efficient glazing to minimize potential energy wastage and severe glare (Meinhold 2010).

An automated system operates the louvers so that they move in response to the sun. This prevents discomforts such as extreme glare that may be caused by excessive exposure to direct sunlight. Moreover, views from interior vantage locations change constantly throughout the day (Brady 2014).

The louvers link the Surry Hills Library and Community Center to its site because the facades change dynamically, reflecting the elements surrounding the building. The louver implementation helps sustainably maintain a comfortable indoor environment, reducing the need for wasteful and costly heating and cooling systems (Brady 2014). Figure 6 shows the vertically-placed active louver system while Figure 7 shows the louver system with details of the materials used.

Figure 6: The operable louver system (Brady 2014)

Figure 7: The louver system with details of materials used (Brady 2014)

3.2 Integrated sustainability technologies

The following are the major sustainability technologies implemented in the Surry Hills Library and Community Centre (Gollings & Chung 2010; Mackenzie 2010; Meinhold 2010):

  • Green roof: to increase thermal mass and reduce heat gain to the center. Natural grasses planted on the rooftop help decrease energy loss by insulating the building envelope. More specifically, the green roof helps cool the building in summer and insulate it in winter, contributing to a healthy indoor environment and energy savings.
  • Labyrinth: to passively filter and temper the air.
  • Rainwater storage tank: to collect, treat, and supply water to the water closets and irrigate the landscape, atrium plants, and the lawn on the Collins Street Reserve. All tap fixtures are optimized for touch sensitivity to minimize potential leakages and wastage.
  • Fan coil systems: to condition fresh air and meet the building’s cooling and heating requirements.
  • PV panels: to offset grid electricity demands and shade the roof.
  • Bio-filter: plants and biomass absorb carbon dioxide and release oxygen, passively filtering out air pollutants.
  • Underground geothermal heat exchanger: to transfer energy drawn from the ‘earth to the building’ and passively temper incoming atmospheric air before it reaches the bio-filter.
  • The environmental atrium: it combines the facility’s integrated ‘respiratory system’ with the main façade, imitating organic and biological models. This way, the building has an ‘external skin’, a ‘structural’ skeleton, and integrated conduits for transferring energy and oxygen. In addition, the conduits propagate information throughout existing nodes while leveraging solar power, rainwater, heat drawn from the earth, and oxygen generated through photosynthesis as part of sustainability efforts.
  • Sensors to control lighting by automatically turning the lights off when spaces are unoccupied and on again when occupants return.

Figure 8 shows the major sustainability technologies used in the building.

Figure 8: The major sustainability technologies (Brady 2014)

These integrated sustainability initiatives drive the following major benefits (Mackenzie 2010):

  • Improved energy conservation: the passive, active, and hybrid systems of bio-filters, labyrinth, PV panels, rainwater harvesting, storage, and recycling, and geothermal energy help minimize reliance on grid power, gas, and water, thus facilitating greater energy savings.
  • Better psychological wellbeing in relation to users/occupants as they can see what is happening within the facility even when they are outside the center. Psychological benefits are further fuelled by the properly ventilated internal environment using an air quality system based on a unique bio-filtration mechanism.
  • Considerable space savings because of optimal arrangement of several multifunctional systems, which helped address the problem of a constrained site.
3.3 Automated building management and control system (BMS)

A computerized BMS is implemented to automate the process of monitoring and controlling the center’s internal environmental conditions in addition to adjusting the cooling, ventilation, and sunshade louver systems throughout the day. This way, the BMS controls lighting and shading as well as heat load. Moreover, the BMS monitors and logs water and electrical systems to detect potential faults and optimize the building’s environmental efficiency (Gollings & Chung 2010).

3.4 Climate adaptive building shell (CABS)

CABS refers to a construction engineering concept in which façades and roofing structures dynamically respond to changes in environmental conditions (Loonen et al. 2013). The Surry Hills Library and Community Centre adopts such a dynamic building envelope, which changes with variability in weather and occupant needs. This contributes to considerable energy savings (lighting, heating and cooling, and ventilation) and improved indoor environmental quality in the center (Brady 2014).

The eastern elevation is made of operable timber-faced louvers that act in response to the sun’s movement. This elevation provides glimpses of occupation through the series of timber-faced panels designed to automatically swing throughout the day to animate the streetscape. In addition, the approach leads to improved air circulation and inflow of daylight (Gollings & Chung 2010). Light fixtures at the eastern elevation deliver sufficient natural lighting throughout the facility along with properly controlled daylight (Brady 2014). Another aspect of CABS is the automated open space sun shading system at the childcare open space (Nyren 2011).

3.5 The suspended U-shaped timber form

The ‘U’ timber form is designed to embrace the prismatic glass atrium. The form is oriented toward the south and the modest converted park. Automated louver systems make up the solid components of the timber form, filtering and controlling sunlight and the general view (Gollings & Chung 2010). Suspending the timber form above the ground facilitates accessibility and transparency. Figure 9 shows the ‘U’ timber form.

Figure 9: The ‘U’ timber form (Gollings & Chung 2010)

3.6 The transitional foyer space

The foyer space represents a lower intermediary form designed to mediate the center’s scale against adjacent buildings while at the same time creating a transparent and inviting entry. The lifted, cloud-like roofing profile draws sunlight into the foyer space and spreads out toward the street to signify the main entrance (Gollings & Chung 2010). A unique spiraling staircase contributes to the architectural elegance of the internal design. The spiral stair creates different spatial zones that link and contrast the three upper horizontal levels. The spiral staircase is shown in Figure 10.

Figure 10: The spiral staircase (Gollings & Chung 2010)

3.7 The environmental atrium

Perhaps the most spectacular feature of the center is the atrium built on the southern façade. The façade is transparent to encourage passersby to visit the building. The glass is accompanied by plantings that blur the indoor-outdoor boundary, which extends the building into the ‘Collins Street’ courtyard. It forms an integral element of the highly innovative external air intake, processing, and supply system. This is because the atrium has triangulated internal cavities, or arteries, where a selection of plants filters the air in a technique known as biomimicry (Mackenzie 2010).

As a result, the system delivers oxygen concentrations of around 22% to the library; most buildings achieve 12-15%, so the atrium-based system provides outstanding air quality. Beyond its sustainability benefits, the atrium creates a visually appealing and healthy space for prolonged reading and children's play (Brady 2014). Figure 11 shows the atrium structure and some of the specially chosen plants.

Figure 11: The atrium (Brady 2014)

The innovative glazed wall on the southern façade spans all above-ground levels. At the top, the wall has triangulated chimneys that draw in air using different bio-filtering and conditioning techniques while revealing the center's stratified section (Hardning 2010). The atrium is symbolic of the building, identifying it as a public center and place (Brady 2014). Figure 12 shows the atrium section.

Figure 12: The atrium section (Brady 2014)

4.0 Construction materials and innovative construction techniques employed

4.1 Construction materials

Facades are made from precast concrete, prefabricated steel and Alpolic metal composite panels, and Prodema's highly resistant natural wood panels. Prodema is PEFC accredited, which assures that its products are made from environmentally and socially responsibly managed forests. The building is therefore well positioned to meet green construction certification requirements such as ECOdesign ISO 14006, Leadership in Energy and Environmental Design (LEED), or the Building Research Establishment Environmental Assessment Methodology (BREEAM). The wooden louvers at the Surry Hills Library and Community Centre complement the building's ventilated facades; they cover surfaces such as glazed areas and help control sunlight and energy consumption in the facility.

Precast concrete and pebblecrete are used for flooring and footways, with tailor-made pebblecrete creating a compelling and lasting impression. PEFC-certified Victorian timber veneers are also used for flooring; these are hand-selected hardwood veneers manufactured from sustainably managed forests. In addition, precast terrazzo of Australian marble and cement is used for flooring to bolster lasting beauty, maintainability, uniqueness, and durability. Functional and environmentally responsible hardwood with a rich color spectrum is used for floor covering.

Walls are made from precast concrete. Timber veneers and Acroma coatings are also used, but for interior walls and ceilings. Acroma coatings are especially useful for finishes: they have low VOC levels and low odor, are smooth, and come in many colors. The coating materials therefore help protect users' health by assuring indoor air quality. Internal walls also feature sliced decorative veneers, rock maple, and tiles.

The building has finely executed concrete finishes, white polyurethane joinery, stone tiles, and high-quality carpet. Locals feared that the high-quality finishes would be quickly damaged (Hardning 2010). However, the architects incorporated the finely designed finishes in a way that befitted the center's public importance.

4.2 Innovative construction techniques employed

4.2.1 Sustainability design initiatives

The project establishes a new standard of environmental sustainability excellence in Australian civic buildings by integrating several environmentally sustainable design (ESD) innovations into public architecture. These include rainwater harvesting and recycling, geothermal cooling bores, automated fabric shading, photovoltaic (PV) arrays, a thermal labyrinth, green roofing, sun-tracking timber louvers, and the bio-filtration atrium (ALTUS PAGE KIRKLAND n.d.).

The tapered prismatic glass atrium is one of the innovative systems included to meet the project's ambitious environmental sustainability objectives. The atrium's chain of triangular airshafts collects atmospheric air and cools it passively, while natural sunlight is filtered through the glass layers into the center's interiors (Gollings & Chung 2010). Figure 13 shows the prismatic glass atrium.

Figure 13: The prismatic glass atrium that implements the transparent façade (Gollings & Chung 2010).

A fundamental objective of the project was assuring optimal air quality for occupants: library users require adequate alertness to facilitate learning, and children using the childcare facilities must be assured of a healthy play space. To meet this goal, the center has an innovative system for growing its ‘own’ fresh air using the natural filtering potential of plants among other passive and organic functions. Outside air drawn in at the top of the environmental atrium is passed through a special collection of plants, with the double-skin façade serving as the cavity that directs the air toward the bio-filter. The plants act as passive filters, absorbing carbon dioxide and releasing oxygen, while the plants and the bedding biomass also remove contaminants to improve air quality. The air then flows through gabions placed under the facility, where the thermal labyrinth heats or cools it. Finally, the bio-filtered and tempered air passes through the four floors of the center before leaving the building as relief air. Fan coil units trim the fresh air to satisfy the building's heating and cooling requirements (City of Sydney 2016; Gollings & Chung 2010; Mackenzie 2010). The process of growing ‘own’ fresh air and the supply air path are represented in Figure 14.

Figure 14: Intake air and supply air paths (Gollings & Chung 2010)

The biomimicry technique is summarized in Figure 15.

Figure 15: Biomimicry (Brady 2014)

The following are some of the major innovative design strategies behind the external air intake and fresh air supply process (Mackenzie 2010):

  • The outside air intake point is located above street level to minimize contamination from passersby and traffic on the three immediate streets.
  • The air intake is oriented to the north-east to leverage the inherent thrust of the wind and overcome resistance that could otherwise require mechanical systems.
  • The fans that drive further air filtration and bolster air supply rely on electricity; the PV panels and geothermal pumps help reduce the grid power requirement.
  • Plant equipment is cooled using the externally collected air as opposed to water.

Building materials were selected for sustainability and durability. The building is constructed using a post-tensioned structural system that minimizes the amount of concrete needed for structural support and framing. Alternatives to PVC were used for electrical and plumbing services to prevent the potential release of harmful pollutants through off-gassing (Brady 2014). The center's finishes are made from materials that cause insignificant harm to the environment, with low volatile organic compound (VOC) products dominating the selection list. The timber components used in various parts of the building are sourced from sustainably managed forests (City of Sydney 2016).

4.2.2 Transparency as an architectural theme

The building has a shiny glass skin supported by a glass structure to form a transparent stopper. Beyond its sustainability role, the prismatic glass atrium establishes a multi-layered architectural theme of transparency. Glass prisms are incorporated in series to implement an open and transparent façade, making the various activities at the facility visible and encouraging participation. The center's social and ethical diversity is revealed from the southern edge of the Collins Street closure, which was converted into a modest public park; people using the park can see the functions of the centre through the glass prism façade. The space extends the function of the Surry Hills Library and Community Centre and reasserts the building as a truly public place. The ‘U’ form is suspended above the ground, which further enhances accessibility and transparency (Gollings & Chung 2010).

The building is thus designed to be highly transparent, exposing most of the activities it accommodates. This makes for a friendly, inviting, and welcoming facility that is open to the public and truly accessible to all. The transparency theme also represents and embodies the community's values and draws aspiration (Gollings & Chung 2010).

5.0 A critical evaluation of the building

5.1 Structure and construction

The Surry Hills Library and Community Centre is a four-floor building. The floor plans were informed by two major factors: how to handle site constraints, and the best strategy for combining and diversifying the facility's offerings. The site is a small, constrained area bound by Crown Street, Norton Street, and Collins Street. The floor plans fully exploit the compact, street-bound 25 by 28 meter site and drive a large impact despite the small scale of the site and the building. Moreover, all of the facilities are provided in a single building and place, and the Collins Street close was redeveloped into a public park, which helped extend the community contribution. Besides maximum site exploitation, program diversity was also upheld, since the project sits at the core of Surry Hills, one of the major inner-city suburbs of Sydney, inhabited by a community of different ages, cultural backgrounds, and incomes. The floor plans are therefore designed to host the following major facilities for the community: a library and specialized learning areas, cafe, childcare, computer laboratory, meeting rooms, kitchens, and conference areas. The floors are also designed to meet the needs and expectations of the diverse user groups: readers, learners, parents, children, administrators, and teachers (Brady 2014; Hardning 2010; Werner 2013). Figures 16 to 19 show the adopted floor plans together with the facilities provided on each floor.

Figure 16: Lower ground floor (Hardning 2010)

Figure 17: Ground floor (Hardning 2010)

Figure 18: Level one (Hardning 2010)

Figure 19: Level two (Hardning 2010)

The main structure is a pre-stressed concrete system, shown in Figure 20. The frame remains unexposed except for off-form concrete columns. The building adopts a disciplined and straightforward floor plan from the footing and basement to the uppermost floor, so it requires only simple structural elements and transfer beams.

Figure 20: Structural analysis (Brady 2014)

The three major elements of the building are the environmental atrium, the U-shaped timber form, and the transitional foyer space. Construction materials and forms are highly sustainable, simple, and consistent across scales. The combination and quantity of materials and components provide structural strength and aesthetics, among other performance qualities such as sustainability (Blundell 2009). The structure, systems, materials, and finishes (wooden and metallic) are finely considered to improve the facility's structural performance. The interior design emphasizes openness and welcoming spaces through strong transparency, facilitated by initiatives such as the glass atrium and a series of active timber louvers that control daylight entry (Hardning 2010).

The building is a success from structural, architectural, and sustainability perspectives, which enabled it to win the Winner for Excellence in Construction Award – Public Buildings (2009) (Thorp 2013). Site size and location restrictions required finely considered floor plans; the plans are relatively tight, yet they deliver the desired results in terms of utility and aesthetics, creating opportunities for users to enjoy close relationships and interactions with each other. The proportional design of the Surry Hills Library and Community Centre makes the building stand out in the inner-city Sydney suburb of Surry Hills (Werner 2013).

5.2 Environmental strategy

The project was steered as a clear and strong symbol of the City's brand of sustainability in its civic undertakings. The architects integrated the center's custom-made elements with architectural characteristics to meet their target environmental function and contribute holistically to the urban character. In terms of performance, monitoring of the building's systems has demonstrated oxygen concentrations about 5% above average (Hardning 2010). This is a remarkable result, although its full value can only be judged if sustained over the long term. The center's sustainability excellence has won it a number of awards, including the National Award for Sustainable Architecture from the Australian Institute of Architects, an Australian Timber Design Awards Public Building High Commendation, and the UDIA Excellence in Sustainable Design Award (Thorp 2013).

The building's integrated ESD system grows its ‘own’ fresh air using specially chosen plants, biomass beds, the environmental atrium, fans, and the labyrinth as key elements. Air is cooled naturally under the facility, which reduces cooling and ventilation requirements by close to 50% (City of Sydney 2016).

The rainwater collection, treatment, and storage tank plays an integral role in water conservation at the center. Recycling is also optimized so that water is reused for flushing toilets and for watering the environmental atrium plants and the lawn. The rainwater system drives water savings in excess of 620,000 liters (City of Sydney 2016).

The wellbeing of users and occupants was the fundamental focus of the center. The constrained construction site forced the design team to devise innovative strategies to utilize the available space and meet complex utility and user requirements. Passive, multifunctional, and hybrid systems were used, in addition to experimenting with natural organisms, to meet strict environmental sustainability requirements. The building therefore integrates a collection of systems that draw inspiration from bio-systems and other elements to enhance the external and internal environments and, consequently, user and occupant physiology (Mackenzie 2010).

5.3 Economy

The center cost approximately AU$13.8 million to build. The internal degree of finish is not extravagant, contrary to claims by some opponents that the center has a lavish curtain (Hardning 2010). ALTUS PAGE KIRKLAND, a project management company, values the project at AU$19 million, and the construction project was delivered under budget (ALTUS PAGE KIRKLAND n.d.). With a cost of approximately AU$240 per square foot directed primarily towards sustainability, the project's budget may be considered justifiable.

The building is very modern and ornate, delivered as an iconic architectural product that leverages multiple ESD initiatives to minimize energy and cost wastage, enhance indoor air quality, and improve occupant experiences. At the same time, the building places little pressure on Sydney's immediate environment, as it is associated with zero greenhouse gas emissions (Thorp 2013).

Globally, local, state, and national governments are putting considerable effort into combating climate change and its consequences. Green or sustainable construction plays a key role by minimizing overdependence on grid power in favor of renewable energy sources, promoting energy savings through techniques such as maximized daylighting, and increasing thermal insulation to reduce the energy consumed by buildings (Blundell 2009). Such sustainable initiatives have allowed the Surry Hills Library and Community Centre to reduce cooling and ventilation requirements by approximately 50% and save close to 620,000 liters of water (City of Sydney 2016).

5.4 Occupant amenity

The lower ground floor (basement) hosts the main library, a history study room, an IT and internet access computer lab, and a magazine and newspaper reading area. The ground floor provides a number of library collections, the loans desk, and a children's space. Together, the lower ground and ground floors contain a diverse collection of close to 30,000 items and several public PCs. Level one hosts a lobby area and a series of kitchens and community rooms. Local businesses may rent the community rooms to host meetings for up to 125 people (Gollings & Chung 2010). In addition, computer, language, and cooking classes may be held in the function room and kitchens, and the kitchens are also used to stream cooking classes to various centers via the internet. Half of level two features miniature handprints and impressions, together with scattered furniture, marking the childcare space, which can accommodate 26 children; this part of the floor also provides a secure lobby, bathrooms, a staff room, and a heat-and-serve kitchen. The other half is an open-air play space complete with fastidious details: a sandpit, a patterned soft-fall area, and automated shade roofing to support children's activities (Gollings & Chung 2010; Hardning 2010). The center's amenities are provided in highly oxygenated, healthy, appealing, transparent, and accessible spaces (Thorp 2013).

5.5 Construction best practice

5.5.1 Focus on early and comprehensive planning

The Surry Hills Library and Community Centre construction project was planned thoroughly during its early stages, which helped achieve accurate cost and time estimates. Consequently, the project was completed within the stipulated time and budget (ALTUS PAGE KIRKLAND n.d.). This performance can be considered excellent, since most public construction projects suffer extensive delays and cost overruns, and considerable time and budget overruns may erode the value associated with a project (Aliverdi, Naeni & Salehipour 2013).

5.5.2 Value management and community involvement

In addition, the project scope and quality were sufficiently upheld because of early consultations and planning. The active and vocal local community was closely consulted during development of the project brief. These discussions established what the community wanted: a building that everybody could share, as opposed to a mere library, community, and childcare facility. They also informed the decision to integrate these facilities into a single building and place. This led to a flawless handover to the client and the user community, the City of Sydney Council and the public respectively. Moreover, the facility was delivered as a truly shared place suitable for the entire community to visit and use in a variety of ways. Also important is the fact that the building represented and reflected the local community's values, a key enabler of acceptance (Gollings & Chung 2010). As a result, the local community has strongly embraced the center since it opened. It has become an inviting public place for people of all social and age groups, offering facilities that symbolize the values of accessibility, transparency, and equity in relation to information and other resources critical to community empowerment.

5.5.3 Focus on sustainability

Sustainability is a key consideration in the construction industry, particularly because of the undeniable risks facing the world today: resource scarcity, climate change, and environmental degradation (Hardning 2010). A considerable part of the construction budget was channeled into sustainability, one of the main construction best practices today. All the architectural finishes are made of materials with low VOC levels and low toxicity, which complements the process of growing ‘own’ fresh air and supplying it throughout the building. Additionally, there is extensive use of specially sourced timber to further support sustainability. Sustainability is especially important in construction projects intended for public or community use (Mackenzie 2010).

5.5.4 Application of public building design principles

Diversity

The building is optimized for use by multiple user groups (readers and learners, parents and children, and administrators and teachers) and balances pragmatic and artistic contributions (minimal use of resources and thoughtful materiality). The design is inspiring and innovative, with comfortable interiors that appeal to all age and cultural groups and consequently promote everyday usage.

Transparency

The center is designed to be highly accessible and revealing, with views in and out as well as connected indoor and outdoor spaces.

Sustainability

An integrated sustainability system comprises active facades (louvers), PV panels, a building management system (BMS), daylighting, geothermal power, bio-filtering plants, a passive air cooling and ventilation system, the labyrinth, and the atrium, among other elements.

Monumentality

The following major initiatives are taken to achieve monumentality, which is critical in public architecture:

  • Unique composition of key building features: the orienting ‘U’ timber form, mediating lobby foyer, and the transparent glass atrium.
  • Small-scale building but with huge impact.
  • Extended public/civic function by constructing the public park at Collins Street edge.

6.0 Conclusion

Evidently, the Surry Hills Library and Community Centre reasserts the Sydney City Council's practice of open expression in public architecture. It incorporates a wide array of architectural elements to form an iconic Sydney building. The site context can be summarized as a diverse commercial and residential mix with a predominantly Victorian architectural style. The center is one of the most valuable and exciting facilities for Sydney residents, providing highly accessible and diverse library, community, kitchen, reading, and childcare services in a single building and place. It embodies and reflects the community's values while upholding immense transparency to attract and welcome the public. There is also a modest public park that extends the center's community service function. The building therefore performs optimally in supporting the diverse needs of all Sydney residents, in addition to its structural resilience.

The building implements a number of sustainability initiatives through an integrated ESD system that comprises active and passive technologies. The air filtration, tempering, and supply system is analogous to the human respiratory system: air from the streets is drawn into the atrium, passively filtered through bio-filtration, conditioned in the labyrinth, and supplied throughout the building. Beyond this, there are the rainwater conservation system, PV panels and geothermal bores, low-VOC materials, the BMS, automated louvers and shading, and sensors that automate turning the lights off and on. These initiatives help create a healthy indoor environment through improved air quality, while the building also drives high energy savings and low greenhouse gas emissions. Further studies are necessary to quantify the sustainability benefits, especially indoor air quality and its associated health implications.


References

Aliverdi, R, Naeni, LM & Salehipour, A 2013, ‘Monitoring project duration and cost in a construction project by applying statistical quality control charts’, International Journal of Project Management, vol. 31, no. 3, pp. 411-423.

ALTUS PAGE KIRKLAND n.d., Public Sector – Surry Hills Library & Community Centre.

Beciri, D 2010, Green architecture – Surry Hills Library and Community Centre, [Online], Available from: <> [Accessed 27 April 2017].

Blundell, L 2009, Surry Hills Community Centre raises the bar for public buildings, [Online], Available from: <> [Accessed 25 April 2017].

Brady, C 2014, Surry Hills Precedent Study, [Online], Available from: <> [Accessed 24 April 2017].

City of Sydney 2016, Surry Hills Library and Community Centre – City of Sydney, [Online], Available from: <> [Accessed 25 April 2017].

Gollings, J & Chung, A 2010, ‘Surry Hills Library and Community Centre / FJMT’, [Online], archdaily, 25 April, Available from: <> [Accessed 25 April 2017].

Hardning, L 2010, ‘Surry Hills Library and Community Centre’, Architecture Australia, vol. 99, no. 2, Available from: <> [Accessed 24 April 2017].

Loonen, RC, Trcka, M, Cóstola, D & Hensen, JL 2013, ‘Climate adaptive building shells: State-of-the-art and future challenges’, Renewable and Sustainable Energy Reviews, vol. 25, pp. 483-493.

Mackenzie, D 2010, ‘Surry Hills Library and Community Centre — so how does it work?’, Architecture Australia, vol. 99, no. 2, 10 March, Available from: <> [Accessed 26 April 2017].

Meinhold, B 2010, Australia's Surry Hills Library Sets New Standard for Green Design, [Online], Available from: <> [Accessed 25 April 2017].

Nyren, R 2011, ULX: Energy-Saving Civic Buildings, [Online], Available from: <> [Accessed 28 April 2017].

Thorp, FJM 2013, The Surry Hills Library and Community Centre, [Online], Available from: <> [Accessed 27 April 2017].

Werner, AJ 2013, Library buildings around the world, Univ.-Bibliothek Frankfurt am Main.

Narrative Account

Symposia were an essential component of ancient Greek social practice. The main room for men, known in Greek as the andron, hosted the party, which comprised two parts. A meal was served in the first segment, while the second part involved the drinking of diluted wine, conversation, speeches, and songs (Allen 205). The andron consisted of couches arranged in a square to ease conversation between the freeborn males who were the guests at symposia. After the meal, the guests were cleaned and perfumed by the slaves in attendance before tasting the unmixed wine. One participant was appointed ‘symposiarch’ to determine, in consultation with the rest of the party, the amount of wine to be served and its concentration. The party for Agathon's tragedy triumph, as described by Plato, is a typical ancient symposium, and it reveals the guests' influence on the nature of the party as well as the narrator's effect.

The nature and social status of the guests at a symposium largely determined how the party fared, including the amount and concentration of the wine served, as well as the conversations and speeches, among other elements of the event. Socrates and other respected personalities, such as Alcibiades, Eryximachus, Phaedrus, and Aristodemus, had converged at Agathon's house to celebrate his winning first prize at the Lenaean festival, which had taken place the day before the party (Candiotto 24). Most of the guests present were renowned intellectuals of the time, including poets, doctors, and politicians, and they commanded immense respect in society. Thus, Plato's symposium was characterized by speeches, conversations, moderate wine drinking, and an immense philosophical appeal because of Socrates's compelling influence.

In ancient Greek tradition, symposia were highly important events organized for entertainment. Prior to the event, Socrates bathed and wore sandals, things he rarely did, to signify the importance of the party he was about to attend that evening. Such parties largely included a dramatic aspect comprising poetry, games, songs, and comic discussions, among others, with only moderate or limited conversation of immense seriousness between the guests. Notably, in Plato's symposium, as narrated by Apollodorus, serious discussions and speeches hijacked the dramatic nature of the party and largely involved a competition of ideologies between the guests (Luz 16). Clearly, Socrates's attendance shifted the event from entertainment and celebration to a forum for the enlightened exchange of ideas and a competition for influence.

Socrates and philosophy occupied a prestigious position in society at the time of Plato's symposium and hence directly or indirectly provided definitions of enlightenment by setting implicit or explicit rules for engaging a wide range of issues. For instance, when issuing invitations to the party, Agathon insisted that the presence of elites such as Socrates would make it one of the most spectacular of all symposia. However, the philosopher and his contemporaries rarely attended such gatherings because of their playful nature and reduced importance. Socrates decided to attend only after the host expressed immense sadness at the failure to secure his attendance (Allen 204). The social status of Socrates and the rest of the elite is thus revealed by Agathon's enthusiasm for their participation and by their ability to increase the value of the less important.

Socrates's impact was felt as soon as he arrived at the party, when Agathon invited the philosopher to share a couch with him so as to benefit from his great wisdom. The philosopher replied to the host mockingly, indicating that if wisdom could flow freely from the more to the less wise, he himself would be the bigger beneficiary of the two. This satirical statement, and the respect that Socrates commanded from guests such as Phaedrus, Alcibiades, and Aristodemus, established a robust philosophical authority over drama, which formed the foundation of the rest of the night's activity. For instance, Agathon, after being cornered by the philosopher's wisdom, suggested that they engage in a philosophical competition later in the night, setting the stage for the speeches and conversations that followed. In addition, the suggestion by Eryximachus, a doctor by profession, to reduce the amount of wine taken, because of the lingering hangover from the previous night's drinking among all the guests except Socrates, who drank in moderation, seems to have been influenced by the philosopher. Furthermore, the guests' decision to send away the female entertainer seems to have been influenced by Socrates's reduced attraction to physical love and his preference for philosophical reasoning. Thus, the philosopher shaped the event directly by changing or setting the rules of definition, as well as indirectly through the immense respect he commanded and the statements he made.

Plato's symposium was narrated by Apollodorus to an anonymous companion, a rich businessman. Apollodorus recalls the account of the party he had given to Glaucon (the Republic's main interlocutor and a half-brother of Plato), who had only scanty details of the event from unreliable sources. Apollodorus heard the story from Aristodemus, one of the symposium's guests, and cross-checked some of the facts with Socrates. Aristodemus, the first-level narrator, was a great admirer of his friend Socrates, who invited him to the party; his version of the narrative may therefore have been largely philosophical so as to glorify the great philosopher. In addition, Socrates's confirmation of the facts may have contributed to the philosophical aspect of the symposium. Furthermore, Plato was a protégé of Socrates and hence may have deliberately emphasized the philosophical aspect and ignored the symposium's other elements to portray the increased value of his discipline. Thus, the narrators possessed immense control over the narrative, which they probably used to develop a perspective of the symposium aimed at an ulterior motive.

Some of the narrators' effect is manifested in the supremacy of philosophy over other disciplines, which is evident throughout the narrative. For instance, the image of Socrates portrays a very wise man who easily outshone every guest at the symposium and was immensely praised by a large number of those present. In addition, the philosopher pointed out and rectified errors in the speeches of the guests who spoke before him, hence seeming to offer an undisputed reference for right and wrong. Furthermore, while offering his speech, Socrates attributed his arguments to a female character known as Diotima, who he claimed had given him his wisdom about love, which he in turn was passing on to the guests, in a style that virtually made the woman one of them (Candiotto 32). Notably, symposium guests were exclusively freeborn males; women's participation in such events was reserved for entertainment, and under no circumstance was a woman supposed to give wisdom to men. Thus, Socrates' decision to indirectly include a woman guest expresses the authority he commanded, and may have been deliberately inserted by the narrators to portray the importance of philosophy and philosophers in the Greek society of the time.

Some scholars argue that Plato's Symposium is to some extent fictional and that most of the characters at the party were products of the philosopher's imagination. Specifically, Plato's portrayal of Socrates shifts from a real person in his early works to a largely fictional character used as his mouthpiece in the middle and later dialogues. Therefore, the figure of Socrates in Plato's Symposium, one of his middle-period dialogues, may have been exaggerated to fit the philosopher's purposes.

The kylix, a black footed cup created in c. 500 BCE, approximately the time at which Plato's symposium is set, reflects the event as described by the philosopher. The interior of the vessel depicts vines with grapes on the outer band and a scene of a symposium around the middle strip, as well as a gorgon's head at the core of the cup. In addition, the painting on the vessel includes a slave serving wine to the symposium's guests. The gorgon, which could turn a person into stone, is placed at the center of the vessel to show the seriousness of the matters handled in Plato's symposium, while the exclusion of other participants, such as the women who usually graced events of that kind, reveals the transformation that resulted from the attendance of Socrates and the other elites (Luz 21). Thus, Plato may have used the kylix to develop an imaginary symposium representing his views on such events.

A symposium's guests and the narrators of its occurrence can immensely shape its nature. For instance, Socrates' attendance completely changed Agathon's party, which had been organized to celebrate his first prize for tragedy. The philosopher shifted the event from a dramatic orientation to a philosophical discussion involving serious conversation, using the immense respect he commanded and well-thought-out statements. On the other hand, the narrators may have distorted the real appearance of the event in order to portray philosophy as more important than the rest of the disciplines. Aristodemus, Apollodorus, and Plato were among the greatest admirers of Socrates and of philosophy, and may therefore have deliberately presented a philosophical perspective of the meeting to make the discipline look more appealing than others, such as medicine and poetry.

Works Cited

Allen, Sarah. "Plato and Levinas." Symposium, vol. 14, no. 2, 2010, pp. 202-206. Philosophy Documentation Center, doi:10.5840/symposium201014229.

Candiotto, Laura. "Review of Cooksey T. L., Plato's Symposium: A Reader's Guide." Plato Journal, no. 12, 2012.

Luz, Menahem. "The Rejected Versions in Plato's Symposium." Plato Journal, vol. 14, 2014, pp. 9-22. Coimbra University Press, doi:10.14195/2183-4105_14_1.

The Morning that Roars: Renewed Patriotism in America (Nonfiction)

            After a long spell of dwindling commitment to the unity and wellbeing of the nation from leaders and citizens alike, the few remaining nationalists were concerned about the future. Truly, they had reason to be worried, because nobody seemed to possess a clear solution to the problem that was threatening to consume the nation. Nonetheless, little did the patriots know that a single morning would be more than adequate to solve the conundrum, though not without a huge cost attached. A spirit of patriotism great enough to equal that of George Washington and the rest of the nation's founding fathers, and a sense of unity comparable to that experienced after the end of the Civil War, was renewed in the heart of every citizen of the United States of America. Undoubtedly, the terror that threatened to tear the nation apart was a compelling wake-up call for the American citizenry to stand up and be patriotic once again.

          On September 11, 2001, the center stage of the drama was Manhattan, New York City, home to the twin towers of the World Trade Center. Standing at a height of about four hundred and fifteen meters, the towers were the tallest structures in the city and served a wide range of purposes. Thousands of people regarded the buildings as their workplace, hundreds of others convened there daily for conferences and meetings, while still others visited to capture an aerial view of the entire city. Beyond these essential purposes, the buildings played another equally important but unrecognized role for the millions of residents and visitors: reassurance. A person in the city could look up at the towers, remember that he or she was indeed in New York, and proceed with the rest of the day's business with pride. However, things were about to change for the worse, or for evil, as many regard the horror that hit the city that morning.

          The sky was blue, the weather was moderately windy, and the day's business was unfolding unremarkably as usual (New York being a twenty-four-hour economy) in the morning hours, when two thunder-like sounds roared through the entire city. Two airplanes, under the control of al-Qaida-affiliated hijackers, had hit the twin towers in quick succession. The sounds were followed by a huge cloud of smoke and the wailing of people trapped in the buildings, mixed with the roar of the resulting fire and of the collapsing structures. Objects, including people jumping off the buildings to escape the conflagration and smoke, were falling from the sky at terrifying speed. A few minutes later, the surrounding area was full of dust, smoke, and wrecked cars, as well as other damaged property. On the streets, people were running for their lives in all directions, in immense fear and confusion. In the air, military and other security planes were hovering, while all non-military aircraft were ordered to land. It seemed that New York City was down and that America had been conquered by the heinous act of terror.

           The events that followed the incident, however, revealed the unexpected, or at least something that had been fading away: the American spirit. For instance, the New York City guard had already initiated the rescue mission even before the rubble had ceased falling from the buildings. Later, rescue departments from other cities and states trickled in to assist in the effort, while all security personnel on leave or off duty were ordered to resume work immediately. Political leaders were everywhere, on the scene and in the media, sending messages of revenge to the terrorists while trying to inspire hope in American citizens. In particular, Hillary Clinton, then senator for New York and a former first lady, was on site and had no kind words for the terrorists. Clinton promised them that America never forgets and that revenge was inevitable.

         The US government, through the politicians and its agencies, responded swiftly and excellently to the occurrence. However, the manner in which ordinary citizens reacted to the ordeal was a clear revelation of a renewed and immense love for the nation. For instance, thousands of New York residents lined the streets to cheer on the rescue personnel for days after the attack, not to mention the hundreds of others who volunteered in many aspects of the mission to save as many lives as possible and to help the victims' families cope with the situation. Journalists and other media personnel spent days and nights giving updates to the rest of the country, which could not physically access the scene. Business people and other eminent figures also poured in support in the form of humongous donations. Notably, every citizen in the country was willing to volunteer in any manner possible. In addition to all the physical and emotional support displayed by the citizens, patriotism strongly stood out. American flags were raised everywhere, from the scene of the terror to the streets of New York, to signify defiance and love for the country. Clearly, the intention of the terrorists had been defeated.

        The dwindling spirit of patriotism was rejuvenated a hundredfold by an action that was meant to suppress it further. Conventional slogans of nationhood and unity, such as 'America first' and 'One Nation under God', were recited over and over by people ranging from politicians and government workers to ordinary citizens. Later, another bigger, better, and more sophisticated tower, along with a memorial park, was erected in the place of the iconic twin towers. In addition, a memorial ceremony has been conducted every year without fail, even long after the al-Qaida leader Osama bin Laden was killed by a United States military mission in Pakistan. Although the rescue missions that started immediately after the attack would later confirm that about two thousand six hundred people had perished and scores of others had been injured in the ordeal, the people of the United States sent a clear message of unity and patriotism to the world. After the attack, the nation gathered physical, emotional, and spiritual support that enhanced its recovery sooner than many expected, but, as promised by Clinton, America never forgets. Certainly, for the victims and witnesses of what is now famously known as the 9/11 ordeal, the terror and havoc of that morning still roars in their hearts to this day, but the American spirit of defiance and patriotism still prevails in a compelling manner.

The Uncontrollable Machines

            Technological advancements have been instrumental to global economic growth for many decades, even centuries. The computer, specifically, has become the cornerstone of every economy in the world: multinational corporations, small and medium enterprises, for-profit and not-for-profit institutions, as well as governments, have immensely embraced computer technology. Notably, computer technology has enabled many businesses to cut costs and hence improve profitability. However, recent changes in technology appear dangerous to the existence of humanity. For example, the idea of artificial intelligence involves the creation of machines that can learn and respond to environmental demands without the help of human beings. The concept of AI, combined with the Internet of Things and the associated challenges of big data, can lead to machines that become superintelligent beyond the control of human beings. Thus, with the inception of highly sophisticated concepts such as artificial intelligence and the Internet of Things, previously highly beneficial technological advancement might become detrimental to the world by creating irreversible outcomes.

           The business world is one of the fields that needs computing advancements to cut essential costs and reduce wastage, among other uses, and hence increase the profits realized from transactions. In addition, non-governmental institutions and governmental agencies are required to deliver improved services and must therefore increase efficiency. Such institutions heavily use computer technology in production, inventory management, and marketing, among other purposes. In this regard, most businesses and not-for-profit institutions use some of the most sophisticated systems available to ensure that they can perform their roles as expected. For instance, many business organizations possess inventory management systems that can establish the need to order or produce a certain item when inventory levels fall below a certain point. In addition, many governments around the world have implemented e-platforms to improve citizens' access to services. Therefore, the business world, as well as governments, has been able to realize immense success from implementing innovative computing concepts.
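The reorder-point behavior described above can be sketched in a few lines. This is a minimal illustration only; the item names, thresholds, and order quantities are hypothetical, not drawn from any system discussed in the essay:

```python
# Minimal sketch of a reorder-point check: when an item's stock falls
# below its threshold, an order for that item is triggered.
# Item names and quantities are hypothetical illustrations.

REORDER_POINTS = {"bolts": 100, "panels": 20}    # minimum stock per item
ORDER_QUANTITIES = {"bolts": 500, "panels": 50}  # how much to reorder

def check_inventory(stock_levels):
    """Return the list of (item, quantity) orders to place."""
    orders = []
    for item, level in stock_levels.items():
        if level < REORDER_POINTS.get(item, 0):
            orders.append((item, ORDER_QUANTITIES[item]))
    return orders

print(check_inventory({"bolts": 80, "panels": 35}))  # -> [('bolts', 500)]
```

In practice such systems attach the check to every stock movement rather than running it on demand, but the underlying rule is this simple threshold comparison.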

          Concepts such as artificial intelligence have found immense application in the medicine, finance, manufacturing, and security sectors because of their potential to cut expenses and to improve efficiency and efficacy. For instance, the use of welding robots in a motor factory greatly increases the speed of production and significantly reduces the margin of error. In addition, artificial intelligence enables firms to enjoy improved reliability because, unlike human beings, machines do not fall sick, nor do they become angry, so time wasted on such events can be avoided. Thus, in the recent past, the idea of artificial intelligence has gained popularity in the business and governmental realms, especially with the inception of the Internet of Things.

          Currently, intelligent machines are used in fields such as education, the military, manufacturing, management, and risk analysis and mitigation. In education, computers that mimic human beings using information hosted online have started replacing the traditional teacher. Meanwhile, complex military operations are analyzed and conducted with the help of highly intelligent computers. For instance, drones are widely utilized on the modern battlefield. Notably, even a small level of inaccuracy can lead to detrimental impacts on humanity. For example, a drone that releases its weapons before reaching the targeted area can lead to the loss of innocent lives. In addition, an inappropriately programmed robot can expose unwanted content, such as pornographic material, to minors. Thus, such heavy reliance on machines requires immense human control to ensure that the expected results can be realized.

          The recent introduction of the Internet of Things to the concept of artificial intelligence has complicated the issue of human control over machines and introduced a wide range of inaccuracies. The Internet of Things (IoT) involves an expansive network of equipment and intelligent computers that creates machines able to interact with the environment without the help of human beings. For instance, the technology, which is at its inception stages, is expected to support the creation of driverless vehicles that will operate in a similar manner to conventional automobiles. In addition, such a vehicle will be able to coordinate its activities with other devices in the network. For example, the ringing of the alarm in the morning could start the vehicle in the parking lot in preparation for departure fifteen minutes later, if that is the owner's routine. Notably, such an idea could be phenomenal for the business world because of its ability to immensely reduce operational costs and minimize time wastage. However, the Internet of Things introduces the problem of big data, which is highly difficult to manage because of sampling. As noted above, the Internet of Things involves the connection of a growing number of machines and pieces of equipment, which generate increasingly large amounts of data. Thus, humans must analyze such data to establish the manner in which machines are learning and communicating in the network.
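The alarm-to-vehicle coordination described above can be expressed as a simple event rule. The sketch below is purely illustrative; the device behavior and the fifteen-minute delay are hypothetical details standing in for whatever routine an IoT platform would actually encode:

```python
# Toy event rule: an alarm event schedules a vehicle start fifteen
# minutes later, as in the morning routine described above.
# The devices and delay are hypothetical illustrations.

from datetime import datetime, timedelta

def on_alarm(alarm_time, delay_minutes=15):
    """Return the time at which the vehicle should start, given an alarm event."""
    return alarm_time + timedelta(minutes=delay_minutes)

start = on_alarm(datetime(2024, 1, 1, 6, 30))
print(start)  # -> 2024-01-01 06:45:00
```

A real IoT deployment would publish the alarm event over a messaging protocol and let subscribed devices react, but the coordination logic reduces to rules of this shape.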

            The analysis of large volumes of data is currently becoming a challenge because of the difficulties of sampling. It is extremely difficult to identify the best tools for data analysis when huge volumes are involved, because the existing tools are limited in the sample sizes they can handle, and such samples are inadequate to provide significant information about the amounts of data expected to be generated by the IoT. In this regard, at some point humans are expected to lose control of intelligent computers. Thus, the future behavior of intelligent machines remains unpredictable, and many think that such machines will start working against humanity.
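One standard way to draw a fair sample from a data stream too large to hold in memory is reservoir sampling. The essay does not name this technique; it is offered here only as a generic illustration of how sampling from unbounded IoT-scale streams is handled in practice:

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Keep a uniform random sample of k items from a stream of unknown
    length, using O(k) memory regardless of the stream's size."""
    rng = rng or random.Random()
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randint(0, i)        # item i survives with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

# Sample 5 readings from a simulated stream of one million sensor values.
picks = reservoir_sample(range(1_000_000), 5, random.Random(42))
print(len(picks))  # -> 5
```

Because the reservoir never grows beyond k items, the memory limitation of conventional tools that the paragraph describes does not apply, though the statistical adequacy of any fixed-size sample remains the open question the essay raises.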

           The loss of human control over intelligent machines is thought to be one of the most dangerous things that could happen to humanity. Currently, artificial intelligence is applied in many fields, including mission-critical areas. For instance, most nuclear weapon technology is highly automated, from manufacturing to operation. In this regard, machines beyond the control of humanity could introduce bugs into the manufacturing process or into operations and hence lead to a wide range of dangerous outcomes, such as the loss of innocent lives. In addition, the automated vehicles highlighted above could cause unavoidable accidents if control were totally lost. Thus, the loss of human control is expected to result in a wide range of devastating impacts on humanity in the future.

          One of the expected future phenomena concerns the increase of machines that will require immense resources to produce and operate and hence pose a massive threat to humanity. For instance, it is expected that intelligent machines will learn to create even more intelligent machines. Without control over smart machines, human beings will not be able to regulate the manner in which such devices create others, in terms of either errors or population. In this regard, machines will start competing with human beings for essential resources, such as minerals that are highly important both in the creation of such devices and to human beings themselves. In addition, such machines may become so numerous that they start competing with human beings for space. Furthermore, some machines may start consuming essential resources, such as oxygen, and hence pose a direct threat to humanity. Therefore, in the future, humankind could face extinction at the expense of smart devices that run out of its control.

             In the future, machines that directly attack human beings are also expected to appear. For instance, intelligent machines controlling the production of consumable chemicals in a factory may, because of bugs, start using the wrong formulas, leading to poisonous products. In addition, many people are expected to die from security-dedicated intelligent machines that start identifying innocent persons as enemies and attacking them. It is also expected that some smart machines will start releasing lethal materials, such as gamma rays, into the environment, causing tremendous harm to human beings. Thus, the future of humanity is at high risk of extinction because of attacks from smart machines.

       A combination of direct attacks and competition for essential resources, among other risks from out-of-control intelligent machines, will pose immense danger to the existence of the human race. In this regard, it is vital for humankind to adopt technological advancement with a lot of caution, or else machines will become a force to reckon with in the future. Concepts such as artificial intelligence and the Internet of Things should be controlled with stringent international policies, in the same manner as nuclear weapons. In addition, experts in the field of computing should start exploring efficient means of analyzing big data to acquire meaningful insights about what might transpire in the future. Failure to adopt such measures leaves humankind highly exposed to extinction, considering the current heavy adoption of artificial intelligence.

Weight Training

Baker, J. S., et al. "Strength and Body Composition Changes in Recreationally Strength-Trained Individuals: Comparison of One versus Three Sets Resistance-Training Programmes." BioMed Research International, vol. 2013, 2013.

This is a research paper documenting the findings of a study conducted to verify the impact of increasing the volume of weight training from one to three sets on muscular strength and body composition. Clearly, the study identifies an area of conflict, namely the benefits of increasing training volume with regard to body composition and strength. However, the scope of the study seems inadequate or shallow because it fails to capture critical elements of weight training. For instance, the study focuses squarely on the impact of volume on body composition and strength but does not include other controversial areas of weight training, such as the impact on the trainee of increasing training intensity, of using varying volumes, or of the appropriate balance between the two aspects of training. Therefore, this study compromises the fundamental principle of generality and hence is inadequate to satisfactorily inform the entire spectrum of weight training.

The fact that intensity and volume are inversely proportional leads to a wide range of unanswered questions, especially regarding the effects of increasing volume or intensity on body strength and composition, as well as the correct combination of the two. However, the issue of definitions in this subject generates immense controversy, such that the myriad studies in this area have undertaken varying focuses and hence led to a wide range of inferences that cannot properly respond to the demands of the weight-training field. Thus, the narrow scope adopted in the study can be justified by the amount of controversy in the area of weight training.

Notably, the study excellently responds to the identified research gap, however narrow, by concluding that increasing volume does not have tangible benefits for body strength and composition. In addition, the paper clearly articulates the procedure, elements, and assumptions used in the study, which enhances its logical flow. For instance, the article includes the sample sizes and categories utilized and a detailed justification for their inclusion. Therefore, the findings and conclusions can be utilized, with caution, in the area of training volume and its impact on strength and composition.

A Comparative Analysis Essay

              In most cases, playwrights develop plays with a specific motive, such as entertainment, education, sensitization, or activism, and sometimes the motivation may involve a combination of two or more agendas. Customarily, the motive for writing is ulterior; writers articulate it using literary elements, such as themes, characters, and tone, as well as techniques that include figurative language and irony, and the audience derives satisfaction from unravelling the intended purpose of a play. Notably, literary elements and techniques are generally applicable in plays, as well as in other forms of literature, and hence pieces can coincidentally exhibit immense similarity. Nonetheless, the ulterior purpose of a play, or the techniques and elements utilized in it, make it unique, unless the playwrights interacted and willingly or unwillingly shared thoughts before writing their pieces, or one writer deliberately copied the previous work of another author. Therefore, plays such as 'Trifles' by Susan Glaspell and 'Poof' by Lynn Nottage may portray immense similarities in plot and in the literary elements and techniques utilized, but the primary message that can be derived from each of them is completely different.

        In Trifles, Glaspell presents the story of a woman, Mrs. Wright, who has murdered her husband because she believed he was the cause of her sad and tedious life. Mrs. Wright's case becomes a complex conundrum for the sheriff, the county attorney, and the general public to solve. However, two women, Mrs. Hale and Mrs. Peters, unravel the mystery behind the murder by interpreting and connecting the complex clues left by Mrs. Wright before and after the act, while the sheriff and the county attorney pursue different strategies to understand the crime. On the other hand, Poof by Nottage involves a woman, Loureen, whose abusive husband, Samuel, dies after she wishes him dead. The confusion that follows the event prompts her to call Florence, her neighbor and partner in crime, to seek advice on how she is supposed to handle the matter. However, Florence reveals that the two had been working on a similar strategy for handling their abusive husbands, only for Loureen to use it prematurely on her partner. The above description of the two plays reveals a wide range of similarities.

       Trifles by Susan Glaspell and Poof by Lynn Nottage have a compelling similarity in the plot and themes utilized to communicate different messages to the audience. The plays revolve around women who have murdered their abusive husbands, as well as others who help articulate the story. In Trifles, Mrs. Wright has murdered her husband, while Mrs. Hale and Mrs. Peters carefully connect symbols to uncover the cause of the woman's action and hence deliver the playwright's message to the audience. In a similar manner, when Loureen indirectly kills her partner in Poof, she calls Florence to advise her on how to handle the occurrence, and their exchanges reveal the writer's main agenda. Notably, the two pieces clearly articulate a theme of gender equality by indicating that women can handle complex issues, such as death, and by placing the abusive males on the receiving end of the women's actions. Thus, the plays share a storyline, which includes dead males and active females, and utilize the theme of gender equality to construct their primary meaning.

         A number of differences are notable in Trifles and Poof, despite the plays' immense similarities, and these differences define the uniqueness of each play. For instance, the depth of the message in Trifles, which requires a structured explanation from two women, is greater than that in Poof, which Loureen and her friend articulate without in-depth explanation. In addition, Trifles' primary message seems to be the equality between men and women and the consequences of mistreating women, as well as the central role females should play in matters regarding gender. The females in the play are glorified and seem to handle complicated matters, while the men are either abusive or incompetent, especially in matters regarding women. However, the core message of Poof concerns the consequences of rushing to act without in-depth thought, as evidenced by the confusion and turmoil that follow the death of Loureen's husband. Thus, the plays carry different primary messages despite having a similar plot and themes, and hence each is unique.

        Clearly, Poof and Trifles possess immense similarities, especially concerning the literary elements and techniques utilized by the playwrights. Notably, the theme of gender equality and the plots of the stories reveal similarities that might lead to the conclusion that one of the writers copied from the other. In addition, the two pieces heavily involve women characters in articulating their core messages to the audience, while the men involved are either incompetent or abusive. Nevertheless, the primary objective of Glaspell, which seems to be focused on the equality of female and male, is completely different from that of Nottage, which is centered on thinking before acting. Therefore, by articulating a different message, Nottage proves that her work is not a revision of Glaspell's play, despite utilizing similar literary elements and techniques; the similarities might thus be coincidental.

Training Needs Analysis

              The current work environment demands workers equipped with immense skills for conducting complex assignments in a cost-effective, safe, and efficient way. Notably, the difference between the expected and the actual level of job fulfilment is used to identify a requirement for training. In this regard, data sources such as the labor inventory, budgets, and organizational climate indicators are utilized to evaluate the organizational gap and hence the need for training (Donovan & Townsend, 2015). Once the need for training is confirmed, a comprehensive analysis is conducted at the person, operational, and organizational levels to establish the specific requirements it should address, so that it satisfactorily responds to the organizational gap. Thus, a needs analysis is the initial step of training instructional design and immensely enhances the optimum use of resources.

Triggering Events

          The organizational performance gap (OPG) at Apex Limited is one of the events that triggered the training needs analysis (TNA), which was conducted with the primary objective of examining the factors contributing to the difference between actual and expected performance. In this regard, the TNA was designed to establish the specific requirements to be addressed by the training. Notably, the specific needs that triggered the TNA at Apex Limited included the high level of employee turnover in the last quarter; low worker motivation and a series of resignations indicated a need for the organization to establish the underlying causes of such events. In addition, the previously high costs and reduced effectiveness of training, as well as increased legal expenses emanating from former staff and employees, immensely informed the decision to conduct a comprehensive TNA. Thus, the identification of the performance gap was the main event that triggered the TNA, which was a reactive approach to addressing issues because it was informed by an existing problem.

Data Sources

              Apex Limited documents, such as employee surveys, work samples, performance measures, benchmarking studies, organizational budgets, the labor inventory, and employee appraisals, as well as the goals and objectives of the company, were examined to identify the probable causes of the OPG. Specifically, documents suspected of providing information regarding the causes of the current low performance were analyzed in comparison with the respective expected output. For instance, employee surveys were used to generate information concerning organizational climate factors, such as motivation. In addition, labor inventory records were examined to establish whether the current level of investment matches the expected output, or whether any imbalances might lead to an inefficient workforce. Furthermore, employee appraisals, work samples, and benchmarking studies were used to gather information concerning the differences between the actual and expected output of individual workers. Finally, previous training-related records, as well as other documents with relevant performance-measure information, were considered in the analysis to verify the ability to meet current expectations.

Levels of Analysis

         In the business world, there exists a wide range of needs assessments that can be utilized in varying employment contexts (Kotlar et al., 2018). Notably, organizational-, operational-, and person-level analyses yield a considerable amount of essential information regarding training needs.

           Organizational Level Analysis. At Apex Limited, an organizational-level analysis was conducted on the reasons, such as business needs, that led to the desire for training. Particularly, the assessment involved an examination of the organization's goals, objectives, and strategies, and was aimed at verifying their validity. In this regard, the factors that informed the need for training were examined; for instance, the assessment focused on the history of employee training in the organization, the needs of training decision makers, and the factors that made training the best solution to the OPG. A document-analysis approach to information gathering was utilized to acquire the necessary information from organizational documents, such as mission and vision statements and previous training records.

        Operational Level Analysis. At this level, the evaluation concentrated on task and employee analysis. Task analysis included an assessment of the duties and skill levels required to accomplish particular tasks in the organization (Roughton & Crutchfield, 2016). In addition, the assessment involved identifying the employee performance gap and the possibility of the training reducing or eliminating it. An observation technique was utilized to collect relevant information at this level; for instance, relevant job descriptions were evaluated to establish their ability to enhance the achievement of overall organizational goals and objectives. Thus, operational-level analysis provided an in-depth understanding of specific jobs in the organization and thereby enabled the development of training with adequate and relevant links to the content of particular tasks.

         Person Level Analysis. Analysis at the person level involved identifying the potential instructors and participants of the training. Specifically, this phase of the evaluation focused on establishing the participants' skills and knowledge levels, as well as their learning styles. Interactive information-gathering techniques were utilized to enhance the accuracy of the information collected; for instance, open-ended interviews were used to collect general information, such as the most popular learning style among the targeted groups of workers, while surveys were used to collect employee-specific data.

Relationship between Information Gathered and OPG

           The analyses conducted above elicited highly relevant information for designing a training program with strong potential to reduce the OPG. For instance, the person-level analysis provided essential employee-specific data that supports the evaluation of the skills and knowledge employees possess, as well as their performance in the job capacities identified at the operational level of analysis. Similarly, information about employee performance relative to job descriptions is essential in determining actual performance, which is used together with the expected performance identified at the organizational level of analysis to determine the OPG.


Conducting a comprehensive TNA is highly important to ensure that the specific elements leading to the OPG are captured in the training. For instance, the TNA for Apex Limited provided in-depth information regarding the current OPG, with essential information collected at the organizational, operational, and person levels. For example, the TNA greatly enhanced efforts to determine the work-related skills to be included in the training by providing detailed data regarding particular employees' skills and knowledge, as well as their expected performance relative to organizational performance. Therefore, the TNA provided adequate and relevant information to support the design of the required program.


Donovan, P., & Townsend, J. (2015). Learning Needs Analysis Pocketbook. Management Pocketbooks.

Kotlar, J., De Massis, A., Wright, M., & Frattini, F. (2018). Organizational Goals: Antecedents, Formation Processes and Implications for Firm Behavior and Performance. International Journal of Management Reviews, 20, S3-S18. doi:10.1111/ijmr.12170

Roughton, J., & Crutchfield, N. (2016). Assessing Training Needs. Job Hazard Analysis, 299-334.

2016 Summer Breakspot Program


The Summer BreakSpot Program is a collaboration of public and private institutions that aims at bridging the nutrition gap during summer by providing nutritious meals to children eighteen years old and below. The 2016 supporters of the program in Broward County, Florida, include the local police through the Sheriff's Office. The program has been largely successful in the area with regard to community outreach in law enforcement, making it a model for 21st-century policing.

            Involvement of the police in charitable programs helps improve the relationship between law enforcement agents and the community. The police-community relationship is critical to enforcing law and order because society plays an integral role in the identification of law-breakers. As a charitable initiative for children, the program is an excellent means of encouraging good morals such as kindness, which assists in reducing current and future crime rates (Bayley, 2016).

            However, critics argue that such programs plant dependence in children and hence increase the chances of the future adults engaging in crime. According to Lord, Kuhns, and Friday (2009), psychological research shows that people who experience a calm childhood with love and kindness are more likely to be law-abiding than their counterparts who undergo difficulties at a young age. Additionally, the Broward police attribute the fall in crime rates among children and adults in the area to the program.


Bayley, D. H. (2016). The Complexities of 21st Century Policing. Policing, 10(3), 163-170.

Lord, V. B., Kuhns, J. B., & Friday, P. C. (2009). Small city community policing and citizen satisfaction. Policing: An International Journal of Police Strategies & Management, 32(4), 574-594.

Big Data Visualization- Twitter



1. Literature Review

1.1. Strategies of Measuring Users' Trustworthiness
     Machine Learning Approach
     Feature Based Methods

1.2. User's Trustworthiness Visualization Tools and Techniques
     Truthy
     The Energy Function
     J48 Decision Trees
     Latent Dirichlet Allocation (LDA) Model
     Bootstrapping

1.3. Users' Influence Visualization Tools and Techniques
     Geo-scatter Maps with Links
     Data
     Small Multiples
     Matrix Diagrams

1.4. Big Data Visualization Tools and Techniques

1.5. Previous Studies Involving Various Credibility Analysis and Visualization Techniques and Tools
     Twitter: A News Media
     Detecting Phishing and Spam on Twitter
     Assessing Credibility/Trust
     Inflammatory and Hate Content on Social Media

1.6. References


1.      Literature Review

This literature review chapter focuses on previous studies of big data visualization, especially techniques used to analyze and visualize the credibility of users of internet-based social media such as Twitter. Recent studies on visualizing and analyzing Twitter content to reveal fake users have also been reviewed, along with resources on Twitter-based cybercrimes such as phishing, link farming, and the spreading of hate or inflammatory content.

According to Cheong (2011), existing social media platforms differ in both the content shared via the platforms and their characteristics. Twitter is one of the most popular social media platforms, with unique characteristics and content. However, De Longueville et al. (2009) note that the content shared on Twitter, which includes users' information, opinions, and reactions, corresponds to real-life events. De Longueville et al. (2009) also indicate that information on the platform is highly polluted, which poses an immense challenge to consumers of Twitter content.

Therefore, the extraction of good-quality information from all the generated content is required due to the increased presence of inflammatory content, fake images, spam, phishing, advertisements, and rumors on the platform. According to Ghosh et al. (2012), micro-blogs such as Twitter are more appropriate for news-based information sharing and dissemination because they are normally public, which widens the content's audience range. Previous studies have used varying classical computational strategies, such as characterization, classification, ranking, and user surveys, to gather more information on the issue of trust on Twitter (Zhang and Xiaoqing, 2014).

Some past studies on the issue are based on various categories of classifiers, such as decision trees, Naïve Bayes, and SVM, to detect non-credible information, spam, and phishing on Twitter utilizing network, user, message, and topic-based features of the platform. Other research has utilized and improved ranking algorithms for questions related to trust, for instance spam and credibility (Dilrukish and Kasun, 2014).

1.1.            Strategies of Measuring Users’ Trustworthiness

According to Zhang and Xiaoqing (2014), there exist several techniques for measuring and visualizing user trustworthiness on social media, including machine-learning, graph-based, and feature-based approaches. Cheong (2011) indicates that some machine-learning techniques are very useful for analyzing Twitter content and users by providing insights into such information. Grier et al. (2010) also note that graphical machine-learning techniques are critical because they enhance the visualization of users' credibility. Some of the widely used machine-learning techniques are briefly discussed below.

Machine Learning Approach

Cheong (2011) defines machine learning as a data-analysis technique that automates analytical model building. Machine learning enables computers to identify hidden insights without being explicitly programmed, by utilizing algorithms that iteratively learn from data. Although machine-learning algorithms have existed for a considerably long time, the recent need to use complex mathematical calculations to analyze big data at improved speed has increased the demand for machine-learning techniques. According to Grier et al. (2010), machine learning evolved from researchers' interest in artificial intelligence and the question of whether computers can learn without being programmed. De Longueville et al. (2009) indicate that there exist three categories of machine-learning techniques: supervised, unsupervised, and reinforcement approaches.

Supervised machine-learning algorithms perform predictions on a particular set of samples (Arakawa et al., 2014). These algorithms identify patterns in the value labels assigned to data points. On the other hand, unsupervised machine-learning techniques organize data into sets of clusters to describe structure and simplify complex data, without the use of labels. Reinforcement approaches utilize every data point to choose an action and later analyze the decision (Arakawa et al., 2014); according to Corvey et al. (2012), the technique changes its approach over time to acquire the best possible results.

Naïve Bayes Classifier Algorithm

One machine-learning technique is the Naïve Bayes classifier algorithm, which supports the classification of a document, email, webpage, or lengthy text. According to Reyes and Smith (2015), a classifier is a function that allocates an element of a population to one of the existing categories. For example, the Naïve Bayes algorithm is commonly utilized in spam filtering; in such a case, the spam filter is a classifier that labels emails as 'spam' or 'not spam'. According to Chung (2016), the Naïve Bayes classifier is one of the most popular techniques that utilize a probabilistic approach to develop machine-learning models, especially for the description of documents. The method is based on Bayes' probability theorem and performs a subjective content analysis.
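The spam-filtering use case described above can be sketched with a minimal toy example. This is an illustrative sketch only, assuming scikit-learn is available; the tweets and labels are invented for demonstration.

```python
# Hedged sketch: a Naive Bayes spam filter of the kind described above.
# The tweets and labels are invented toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

tweets = [
    "win a free iphone click this link now",
    "claim your free prize money today",
    "great talk on data visualization at the conference",
    "enjoying the new machine learning course",
]
labels = ["spam", "not spam", "not spam", "spam"][::-1]  # i.e. spam, spam, not spam, not spam
labels = ["spam", "spam", "not spam", "not spam"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(tweets)      # bag-of-words counts per tweet
model = MultinomialNB().fit(X, labels)    # applies Bayes' theorem per class

test = vectorizer.transform(["free money click now"])
print(model.predict(test)[0])
```

Because every word in the test message appears only in the spam examples, the classifier labels it as spam; real filters train on far larger corpora.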

Figure 1: Naive Bayes graphical representation (Reyes and Smith, 2015).

Previous studies have applied the Naïve Bayes approach to investigate fake users and content on social media, especially Twitter. Facebook utilizes a Bayes classifier algorithm to analyze updates that express negative or positive emotions, and Google uses the technique to compute relevancy scores and index documents. The technique is also very popular for analyzing and classifying technology-related documents and filtering spam emails (Dilrukish and Kasun, 2014).

K Means Clustering Algorithm

K-means is a very common unsupervised machine-learning approach utilized to perform cluster analysis (Alrubaian et al., 2016). The method is iterative and non-deterministic: the algorithm operates on a specific data set via a predefined cluster number K and partitions the input data into K clusters. For instance, the approach can be used to cluster Google search results for the word 'jaguar' according to the manner in which the word is used: documents where 'jaguar' refers to an animal can be clustered in one category, while those where it refers to a car can be grouped in a different set. The method has also been heavily applied to detect fake users by clustering content based on similarity and to establish the relevance rate of content on internet-based social media such as Facebook and Twitter (Kim and Park, 2013).

Support Vector Machine (SVM) Learning Algorithms

SVM is a supervised machine-learning method in which the training data set teaches the approach about the classes so that new data can be easily classified. The approach classifies data into different classes by establishing a boundary known as a hyperplane that separates the data set into classes. SVM algorithms exist in two categories, linear and non-linear (Chung, 2016): linear SVMs separate the training data set using a hyperplane, while non-linear algorithms do not. According to Grier et al. (2010), SVMs are very applicable to analyzing stock markets and can also be used to classify data on online social media.
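The hyperplane idea above can be illustrated with a two-dimensional toy data set. This is a hedged sketch assuming scikit-learn; the feature vectors and class labels are invented.

```python
# Illustrative sketch (invented 2-D data): a linear SVM finds the
# separating hyperplane between two classes, as described above.
from sklearn.svm import SVC

X = [[0, 0], [1, 1], [1, 0], [4, 4], [5, 5], [4, 5]]   # toy feature vectors
y = [0, 0, 0, 1, 1, 1]                                  # two classes

clf = SVC(kernel="linear").fit(X, y)   # linear kernel -> hyperplane separator
print(clf.predict([[0.5, 0.5], [4.5, 4.5]]))
```

New points near each cluster fall on the corresponding side of the learned hyperplane; a non-linear kernel (e.g. `kernel="rbf"`) would be used when no straight boundary separates the classes.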

According to Dilrukish and Kasun (2014), other machine-learning techniques include Apriori, linear regression analysis, which performs a comparison of two variables, and decision trees, among others.
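A decision tree of the kind Figure 2 depicts can be sketched on toy data. This is an illustrative sketch assuming scikit-learn; the two features (a hypothetical average tweet length and URL count) and the labels are invented.

```python
# Minimal decision-tree sketch (invented data): the tree learns
# threshold splits on features to classify new records.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[1.0, 10], [1.2, 12], [3.5, 40], [3.8, 45]]   # hypothetical [urls, length]
y = ["credible", "credible", "spam", "spam"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree))             # textual view of the learned splits
print(tree.predict([[3.6, 42]])[0])  # classify a new record
```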

Figure 2: decision tree (Chung, 2016)

Feature Based Methods

Feature-based methods of analyzing internet-based social media include using text, network, propagation, and top-element subsets. Text subsets include analyzing characteristics of messages, such as average tweet length, and sentiment features, such as URLs. Network subsets concern message authors, including the number of friends and followers. Propagation subsets include tweets and retweets, among other features. Finally, top-element subsets include the fraction of tweets containing the most frequent hashtags, URLs, and mentions, among others (Dilrukish and Kasun, 2014).
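The text-subset features listed above can be computed directly. The sketch below uses only the standard library and invented tweets; the substring checks are a simplification of real URL and hashtag detection.

```python
# Hedged sketch: computing text-subset features of the kind listed above
# (average tweet length, fraction of tweets containing URLs or hashtags).
tweets = [
    "Breaking: floods in the city http://news.example #weather",
    "Just had coffee",
    "Check this out http://spam.example http://spam.example",
]

avg_length = sum(len(t) for t in tweets) / len(tweets)
frac_urls = sum("http" in t for t in tweets) / len(tweets)       # crude URL check
frac_hashtags = sum("#" in t for t in tweets) / len(tweets)      # crude hashtag check

print(round(avg_length, 1), round(frac_urls, 2), round(frac_hashtags, 2))
```

Such per-message features are exactly the kind of input the classifier-based studies discussed later feed into decision trees, Naïve Bayes, or SVM models.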

1.2.            User’s Trustworthiness Visualization Tools and Techniques

According to Chung (2016), there exist numerous tools and techniques for the visualization of users' trustworthiness, including graph-, chart-, and map-based approaches. Chung (2016) also argues that such tools and techniques are critical in visualizing users' credibility and influence, since they create a physical representation of data, thereby improving understanding. Previous studies have examined several credibility and influence approaches and identified the strengths and weaknesses associated with these strategies.

One tool used in visualizing users' trustworthiness is Truthy, an online platform for studying the diffusion of information on Twitter and computing the trustworthiness of publicly streamed micro-blogging content related to a particular event, in a bid to detect misinformation, political smears, astroturfing, and other categories of social media pollution. Truthy utilizes a Boolean technique to analyze users' content, whereby the tool returns a true or false value (Arakawa et al., 2014). Regarding the visualization of users' credibility, the tool is applied to identify malicious information and graphically represent the data.

The Energy Function

The energy function is a reliable tool that enhances relational learning over large amounts of data from many applications, such as internet-based social media. Grier et al. (2010) note that the energy function involves embedding multi-relational graphs in a vector space that is both continuous and flexible, while enhancing the original data. Some of the related techniques involve encoding the semantics of graphs so as to assign low energy values to components that are plausible. Graphs from the tool are very relevant for visualizing data related to particular content (Awan, 2014).

J48 Decision Trees

According to Jungherr (2014), the decision tree is another robust category of tool for visualizing online data, whereby an algorithm known as Iterative Dichotomiser 3 is used to predict the target variable of a new data record. The tool utilizes attribute-based features, such as the length and width of content, to predict the objective attribute. The technique is very useful in classifying content and producing figures that improve the visibility of data (Arakawa et al., 2014). Cheong (2011) also notes that the tool has been widely used to analyze and visualize the credibility of online social media users and their content, due to the method's ability to graphically display the process of classifying input. Dilrukish and Kasun (2014) also argue that because the algorithm can be used with other powerful tools, such as Weka, a Java application developed by Waikato University in New Zealand that improves preformatted data classification and visualization of the process, decision trees are widely used in the analysis of online content that demands powerful software.

Latent Dirichlet Allocation (LDA) Model

The LDA model is a technique that utilizes the Dirichlet distribution to identify topics in documents and provide a percentage representation of the topics (Arakawa et al., 2014). According to Dilrukish and Kasun (2014), the technique is highly applicable in visualizing the credibility of internet-based social media such as Twitter, especially in identifying inflammatory or hate content. Dilrukish and Kasun (2014) argue that, since the technique can establish topics with offensive words and provide the data in percentages, thereby enhancing the process of separating malicious and genuine content, the method improves the visualization of content and user credibility.
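The percentage representation described above can be sketched with scikit-learn's LDA implementation on a tiny invented corpus; real topic models require far larger corpora to produce meaningful topics.

```python
# Hedged sketch: LDA assigns each document a mixture of topics whose
# proportions sum to 1 (i.e. percentages). Toy corpus, invented text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "election vote government policy debate",
    "government policy election campaign",
    "hateful insult slur attack insult",
    "attack slur hateful abuse",
]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

doc_topics = lda.transform(counts)   # per-document topic proportions
print(doc_topics.round(2))           # each row sums to 1
```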

Bootstrapping

Bootstrapping is a statistics-based approach utilized to analyze the efficiency of data analysis and visualization tools using random samples (Sharf and Saeed, 2013). The technique involves assigning accuracy measures, such as prediction error, confidence intervals, or variance, to random estimates or samples. According to Corvey et al. (2012), the technique is very useful in visualizing internet-based social media users, whereby it estimates the performance of the approaches used to classify and analyze data, hence providing immense knowledge about the topic. In addition, the output of bootstrapping includes a graphical representation of the techniques used to evaluate credibility, hence improving the visibility of data.
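The resampling idea above can be shown with a minimal percentile bootstrap. This standard-library sketch uses invented accuracy scores; real studies would bootstrap the evaluation data itself.

```python
# Minimal bootstrap sketch (invented accuracy scores): resample with
# replacement to attach a confidence interval to an estimate.
import random

random.seed(0)
scores = [0.91, 0.88, 0.93, 0.90, 0.87, 0.92, 0.89, 0.94]  # hypothetical

means = []
for _ in range(1000):
    sample = [random.choice(scores) for _ in scores]   # resample with replacement
    means.append(sum(sample) / len(sample))

means.sort()
low, high = means[25], means[974]    # ~95% percentile interval
print(round(low, 3), round(high, 3))
```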

Other approaches used to improve the visibility of users include rating, whereby approaches and user accounts are rated using features such as precision and the quality of content, respectively (Verma, Divya and Sofat, 2015). Computing percentages of the data involved in various analyses is another technique for improving the visibility of credibility; the method may include charts containing information from several mathematical computations and comparisons.

1.3.            User’s influence Visualization tool and techniques

Several tools and techniques exist that can be used to analyze and visualize users' influence on Twitter, and some have been applied to visualize user relationships and interactions on internet-based social media. A number of these techniques are discussed below.

Geo-scatter Maps with Links

According to Dilrukish and Kasun (2014), a geo-scatter map with links, a node-link diagram overlaid on a map, is a natural fit for graphs that have geo-located nodes. The points representing the child commit's user and the parent commit's user are connected with a semi-transparent line to depict build-upon relationships. Anwan (2014) notes that the lines become more opaque as the build-upon connections between nearby locations increase. A log-scaled circle, which highlights diversity at a location at the expense of the comparison's accuracy, is used to represent the number of commits made at that specific location. According to Chung (2016), the technique has been used to visualize friendships on Facebook and professional networks. Due to its ability to provide a detailed analysis of connections between users and to graphically represent such relationships, the geo-scatter map with links is highly applicable in visualizing users' influence on Facebook.

Data

Data is one component of social media that can be highly useful in visualizing users' influence. According to Dilrukish and Kasun (2014), using tools such as crawlers, a seed group of usernames can be formed, whereby for every data repository the contributor, collaborator, and owner usernames, along with branch names, can be established. Fresh usernames are utilized to identify new repositories, while branch names are utilized to establish commits. In addition, data from online social media can be used to construct graphs, which can further enhance influence visibility. Anwan (2014) argues that Twitter content has been heavily utilized in efforts to visualize users' influence on the platform.

Small Multiples

Generating a matrix of maps from data, a technique known as small multiples, is another way to increase the depth of analysis and improve the visualization of users' influence. According to Anwan (2014), comparison is at the core of quantitative reasoning, whereby small-multiple designs, data-bountiful and multivariate, are used to visually enhance the comparison of changes. Dilrukish and Kasun (2014) note that the technique enables the visualization of patterns that help identify unforeseen influences, since these buck established trends. On Twitter, users' influence can be visualized using small-multiple techniques by revealing user details such as relationships, activities, and locations, among others.

Matrix Diagrams

Dilrukish and Kasun (2014) argue that although linked scatter maps provide a users'-influence visualization technique that enhances the identification of critical patterns in an intuitive way, discerning more subtle relationships and connections with the method can be cumbersome and sometimes impossible. Therefore, matrix diagrams of social links are required; they improve visualization by minimizing clutter and enable the perception of edge metrics using visual encoding based on the Honeycomb project. Chung (2016) defines a matrix as a grid in which every cell represents a link metric while the columns and rows represent nodes. Some of the metrics used in this visualization technique include followers, the number of follow links; asymmetry, the relative difference between the totals of follow links in each direction; and deviation from the expected, the relative difference between the total of actual links and the links expected from a random sampling of the node distribution.
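The asymmetry metric described above can be sketched on an invented follower adjacency matrix; this is a simplified illustration, not the Honeycomb project's actual encoding.

```python
# Hedged sketch of the asymmetry metric described above, on an invented
# matrix: follows[i][j] = 1 if user i follows user j.
follows = [
    [0, 1, 1],
    [0, 0, 1],
    [1, 1, 0],
]

def asymmetry(i, j):
    """Relative difference between follow links in each direction."""
    a, b = follows[i][j], follows[j][i]
    total = a + b
    return 0.0 if total == 0 else abs(a - b) / total

print(asymmetry(0, 1))  # one-way follow
print(asymmetry(0, 2))  # mutual follow
```

A one-way follow gives maximal asymmetry (1.0), while a mutual follow gives 0.0; in a matrix diagram each cell would be colored by such a value.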

Polat (2014) indicates that matrix diagrams are robust tools for visualizing users' relationships and connections on Twitter, since they can provide comparisons that enhance an in-depth understanding of the relationships. According to Dilrukish and Kasun (2014), using matrix diagrams it can be easy to identify the Twitter accounts responsible for much of the user influence on the platform, since the diagrams offer an abstract representation of user relationships.

1.4.            Big Data Visualization Tools and Techniques

Regarding big data visualization, the energy function discussed above is widely used to improve the visibility of large amounts of data, such as data from internet-based social media. According to Ghosh et al. (2012), big data visualization can be challenging due to the volume of content involved and the complexity of big data platforms such as browsers and some websites. Therefore, Grier et al. (2010) argue that robust data analysis and visualization tools are required. One such tool is the energy function, which assists in providing an in-depth analysis of online data and a graphical presentation of output, hence greatly improving visibility. According to Dilrukish and Kasun (2014), the possibility of using powerful software to analyze the J48 decision tree's process of classifying data makes such algorithms highly favorable for the analysis of big data; for instance, the J48 classification process is highly visible using software such as Weka.

1.5.            Previous Studies Involving Various Credibility Analysis and Visualization Techniques and Tools

Twitter: A News Media

Immense research has been conducted to analyze the relevance of internet-based social media, especially Twitter, as an agent of news dissemination. Grier et al. (2010) revealed that Twitter is one of the prominent internet-based news media, with eighty-five percent of discussion topics on Twitter found to be related to news. The research also highlighted patterns of tweeting activity and the relationship between user-specific parameters, such as followee and follower counts, and tweeting/retweeting numbers.

A study by Albalawi and Sixsmith (2015) utilized an unsupervised topic-modelling approach to compare extracted news topics against those from the New York Times, a conventional medium for news dissemination. The study revealed that, although Twitter users show less interest in world news than consumers of conventional media, they still actively spread news regarding essential world events. Chung (2016) conducted another critical study of Twitter as a news medium using nine hundred news events from 2010-2011, demonstrating techniques to map news-event-related tweets utilizing the energy function; the researcher also proposed strategies for mapping the tweets that act as novel event-detection methods.

Detecting Phishing and Spam on Twitter

The existence of malware, phishing attacks, spam, and compromised accounts is a primary source of concern regarding the quality of information on Twitter. Previous studies have examined numerous mechanisms to filter phishing and spam and proposed a number of effective solutions. According to Grier et al. (2010), phishing is one of the most prominent issues on social media platforms such as Facebook and Twitter; Albalawi and Sixsmith (2015) indicate that every year, genuine users of such platforms lose millions of dollars to phishing-related fraud. A study by Chung (2016) revealed the contribution of URL shortener services to the spread of phishing, establishing that such services enhance phishing by hiding the identity of links. The scholar also demonstrated that social media has grown to match e-commerce-related sites such as PayPal in popularity as a phishing target.

To validate the study, the researcher utilized blacklisted phishing URLs sourced from phish tanks. In a follow-up study, De Longueville et al. (2009), using varying features such as URL-, tweet-, and WhoIs-based features, established features that point to phishing tweets and used them to identify phishing tweets with 92.5 percent accuracy. The primary deliverable of the study was a Chrome extension deployed to identify phishing on Twitter in real time. Ghosh et al. (2012) also found that of twenty-five million URLs posted on Twitter, 8 percent point to malware, phishing, and frauds that appear on well-known blacklists.

The scholar also established that compromised accounts of legitimate users were the major avenue for spreading spam, rather than dedicated or fake accounts created by spammers. Corvey et al. (2012) also conducted a study characterizing link farming on Twitter and proposed a method of fighting it; according to the scholar, link farming is a technique used to improve the rank of Twitter accounts by linking them to each other. The study also recommended a ranking strategy aimed at punishing users who follow spammers, and revealed a substantial decrease in spammers and their followers on the network. In an analysis of the cyber-criminal ecosystem, Polat (2014) also established the manner in which offenders create a small global network, nicknaming accounts with extraordinarily large numbers of followings and followers 'social butterflies'.

In the study, the scholar also proposed a method, known as the criminal account inference algorithm, to detect new offenders on Twitter from a group of known criminals. The algorithm propagates a malicious score from users to followers, including other social engagements, to identify likely malicious accounts, and its performance was evaluated using a real-world dataset. Alrubaian et al. (2016) also used machine-learning strategies to detect spammers, utilizing URL searches, keyword-detection techniques, and username-pattern matching, and achieved 91 percent accuracy. The study also established that spammers tweeted slightly more frequently than legitimate users and that their accounts were not as new as expected.

Spammers also constitute a category of users on other social media, such as the video-sharing site YouTube; according to Dilrukish and Kasun (2014), there exist three categories of real YouTube users: legitimate users, promoters, and spammers. Alrubaian et al. (2016) also insightfully characterized phantom profiles for gaming-related applications on Facebook, identifying the differences between the activity and behavior of phantom and legitimate user profiles. The study was conducted using an online game known as Fighters Club to establish how many of the total user applications were legitimate or phantom.

Assessing Credibility/Trust

Previous research by the computer science community has focused on assessing, analyzing, computing, and characterizing the credibility and trust of internet-based social media (Dilrukish and Kasun, 2014). One such study was conducted by Chung (2016), who developed Truthy, a tool for studying the diffusion of information on Twitter and computing the trustworthiness of publicly streamed micro-blogging content related to a particular event, in a bid to detect misinformation, political smears, and astroturfing, among other categories of social media pollution. In the study, several cases of abuse by Twitter users were presented using Truthy, a live web service based on the above descriptions.

Figure 3: A screenshot of Truthy analysis (Chung, 2016).

Several researchers have also applied classical machine learning methods to establish the credibility of online social media content. Choukri et al. (n.d.) established that partially automated classification techniques can differentiate news topics from conversational topics, and evaluated the credibility of such approaches using varying Twitter features. Using the J48 decision tree classification algorithm, the scholars achieved 70-80 percent precision and recall, evaluating the results against human-perceived credibility as the ground truth. Features used in the study included topic, message, user, and propagation-based features, which supported observations such as that tweets with negative sentiments are associated with credible news, while tweets without URLs are mostly related to non-credible news.
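A decision-tree credibility classifier of the kind described above can be sketched as follows. J48 is Weka's C4.5 implementation; this sketch uses scikit-learn's `DecisionTreeClassifier` as an analogous stand-in, and all feature values and labels are invented for illustration.

```python
# Illustrative decision-tree credibility classification, in the spirit
# of the J48 experiment above. scikit-learn's DecisionTreeClassifier
# stands in for Weka's J48; the toy features and labels are made up.
from sklearn.tree import DecisionTreeClassifier

# Message-level features: [has_url, negative_sentiment, retweet_count]
X_train = [[1, 1, 120], [0, 0, 2], [1, 1, 300], [0, 0, 1],
           [1, 0, 45], [0, 1, 3], [1, 1, 80], [0, 0, 0]]
y_train = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = credible news, 0 = not credible

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

# Classify two unseen tweets by the same features.
X_test = [[1, 1, 150], [0, 0, 4]]
y_pred = clf.predict(X_test)
```

In a real evaluation, predictions would be scored against human-labeled ground truth with precision and recall, as in the study.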

Corvey et al. (2012) argue that apart from the credibility of content shared on online social media, the credibility of users is also critical. Ghosh et al. (2012) conducted a study using automated ranking techniques to detect credible information sources on Twitter and identify any trusted expertise related to a source. The study also observed that network structure and content are very prominent features in ranking Twitter users by effective credibility.

Some previous studies have also focused on analyzing genuine information sources during specific major world events. For instance, in an analysis of tweets posted during the Mumbai terrorist attacks, Alrubaian et al. (2016) established that the largest share of information sources were unknown accounts with considerably low follower counts and hence reduced reputations on Twitter. The study indicated the need for automated mechanisms for assessing the credibility of Twitter information.

In a follow-up study, the scholars used SVM Rank, machine learning algorithms, and relevance feedback, which are information retrieval techniques, to evaluate the credibility of content on Twitter. Analyzing 14 high-impact events of 2011, they established that 14 percent of tweets posted about an event were spam, while 30 percent contained situational information about the event. The study also identified that only 17 percent of event-related tweets contained credible situational information.
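The SVM Rank approach mentioned above can be illustrated with the classic pairwise reduction: a linear SVM is trained on feature differences of (more credible, less credible) tweet pairs, and the learned weight vector then scores individual tweets. This is a hypothetical sketch; the features, pairs, and tweets are invented for the example.

```python
# Minimal pairwise learning-to-rank sketch for tweet credibility,
# mirroring the SVM-rank idea. Features and pairs are illustrative.
import numpy as np
from sklearn.svm import LinearSVC

# Tweet features: [has_url, author_followers (thousands), num_mentions]
tweets = np.array([
    [1, 50.0, 1],   # credible situational tweet
    [0, 0.2, 4],    # rumor
    [1, 12.0, 0],   # credible
    [0, 0.5, 6],    # rumor
])
# Pairs (i, j) meaning tweet i should rank above tweet j.
pairs = [(0, 1), (2, 3), (0, 3), (2, 1)]

# Train a linear SVM on difference vectors labeled +1/-1.
X = np.vstack([tweets[i] - tweets[j] for i, j in pairs] +
              [tweets[j] - tweets[i] for i, j in pairs])
y = np.array([1] * len(pairs) + [-1] * len(pairs))
ranker = LinearSVC(C=1.0).fit(X, y)

# Score each tweet with the learned weights; higher = more credible.
scores = tweets @ ranker.coef_.ravel()
order = np.argsort(-scores)  # indices from most to least credible
```

Relevance feedback would then refine the ranking iteratively using tweets a user marks as credible, but that loop is omitted here.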

Dilrukshi and de Zoysa (2014) also used a supervised Bayesian network, a technique for predicting the credibility of tweets in emergency situations, to analyze tweets generated during the 2011 England riots. The study proposed and evaluated a two-step methodology: in the first step, a K-means function detected emergencies, and in the second step, a Bayesian network structure learning function determined the credibility of the information. Evaluation of the algorithm revealed an improvement over existing techniques. Chung (2016) also used tweets from eight different events to identify credibility indicators in varying situations; the study found tweet length, mention tweets, and URLs to be the best credibility indicators, and also revealed that the prevalence of such features increases immensely during emergencies.
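The first step of that two-step pipeline, detecting an emergency before any credibility model runs, can be sketched with K-means clustering over per-time-window activity features. This is an illustrative stand-in: the feature values are invented, and the original work used a Bayesian network, not shown here, for the second step.

```python
# Rough sketch of the two-step pipeline: K-means first flags an
# emergency-like burst of tweets; credibility modeling runs only on
# the flagged windows. Feature values are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Per-time-window features: [tweets_per_minute, fraction_with_urls]
windows = np.array([
    [10, 0.20], [12, 0.25], [11, 0.22],    # normal activity
    [220, 0.70], [250, 0.75], [230, 0.72], # burst: candidate emergency
])

# Step 1: cluster windows; the high-volume cluster marks the emergency.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(windows)
burst_label = km.cluster_centers_[:, 0].argmax()
emergency_windows = np.where(km.labels_ == burst_label)[0]

# Step 2 (placeholder): a credibility model, a Bayesian network in the
# original study, would be applied to tweets from emergency_windows.
```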

Chung (2016) conducted a study using a different approach from the one highlighted above, carrying out a survey to establish users' perceptions of content on Twitter. About two hundred participants were asked to mark what they considered indicators of the credibility of users and content, and the research established that people use features visible at a glance, such as a user's photo or username, to judge the credibility of content (Awan, 2014).

In addition, Chung (2016) also showed that users are poor judges of credibility based on content alone, as they are often influenced by other pieces of information such as the username. The study also revealed a disparity between the features used by search engines and those considered relevant to credibility by users. Ghosh et al. (2012) also used a different approach to identify users with high trustworthiness and credibility, identifying topic experts on Twitter. The technique used in the study was based on Twitter crowd concepts, that is, Twitter lists.

Inflammatory and Hate Content on Social Media

Previous studies have revealed that social media has been used to instigate hate. According to Dilrukshi and de Zoysa (2014), inflammatory content propagated during volatile real-life situations can have many adverse effects. Grier et al. (2010) conducted one of the few existing studies analyzing hate-related content on Twitter or YouTube, using semi-automated techniques to establish content used to spread hate on YouTube. They identified hate users, virtual communities, and videos using social network analysis and data-mining techniques with a precision of 88 percent; specifically, they used bootstrapping techniques to detect hate on YouTube. Chung (2016) also used topic modelling and machine learning techniques to detect offensive content on Twitter, outperforming keyword matching techniques by achieving a true positive rate of about 75 percent. The scholar applied a seed lexicon of offensive words followed by LDA models to discover topics. The study established that most of the words in a specific topic category were not offensive in isolation but formed sex-related phrases when combined with other words.
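The seed-lexicon plus LDA approach above can be sketched as follows: documents that hit a small lexicon of offensive words are selected, and LDA then surfaces the topics those words co-occur in. The corpus and lexicon here are sanitized toy stand-ins for illustration.

```python
# Illustrative seed-lexicon + LDA pipeline, in the spirit of the
# offensive-content study above. Corpus and lexicon are toy stand-ins.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

seed_lexicon = {"insult", "slur"}
tweets = [
    "total insult you are a slur spreading account",
    "what an insult this slur filled rant is",
    "lovely weather and great coffee today",
    "great coffee and lovely weather again",
]

# Seeding step: keep only tweets that contain a lexicon word.
flagged = [t for t in tweets if seed_lexicon & set(t.split())]

# Topic step: fit LDA on the corpus and read off each tweet's
# dominant topic; topics dominated by flagged tweets are inspected.
vec = CountVectorizer()
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X).argmax(axis=1)
```

On a real corpus, the finding reported above would appear here as topics whose top words are individually innocuous but offensive in combination.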

The content analyzed above reveals various approaches for directly identifying fake users, as well as indirect methods that identify malicious content and actions on Twitter. Various credibility analysis and visualization approaches have been highlighted, including feature-based and graph-based machine learning techniques and the energy function. For instance, the LDA model has been identified as one of the most powerful techniques for identifying topics with offensive content and enabling visualization of the credibility of internet-based social media, as the method provides a percentage representation of topics with offensive words. Several surveys and case studies have also been used to support the arguments made in the chapter. In addition, various techniques for detecting phishing and spam, such as analyzing URLs, have been identified. The literature review reveals that a myriad of techniques for identifying fake Twitter users exists. However, the problem of fake internet users persists; hence the need for more efficient techniques for analyzing and visualizing Twitter content.

1.6. References

Albalawi, Yousef, and Jane Sixsmith. “Identifying Twitter Influencer Profiles for Health Promotion In Saudi Arabia”. Health Promotion International (2015).

Alrubaian, Majed, Muhammad Al-Qurishi, Mabrook Al-Rakhami, Mohammad Mehedi Hassan, and Atif Alamri. “Reputation-Based Credibility Analysis Of Twitter Social Network Users”. Concurrency and Computation: Practice and Experience 29, no. 7 (2016): e3873.

Arakawa, Yui, Akihiro Kameda, Akiko Aizawa, and Takafumi Suzuki. “Adding Twitter-Specific Features To Stylistic Features For Classifying Tweets By User Type And Number Of Retweets”. Journal of the Association for Information Science and Technology 65, no. 7 (2014): 1416-1423.

Awan, Imran. “Islamophobia and Twitter: A Typology Of Online Hate Against Muslims On Social Media”. Policy & Internet 6, no. 2 (2014): 133-150.

Chung, Jae Eun. “A Smoking Cessation Campaign On Twitter: Understanding The Use Of Twitter And Identifying Major Players In A Health Campaign”. Journal of Health Communication 21, no. 5 (2016): 517-526.

Corvey, W. J., Verma, S., Vieweg, S., Palmer, M., and Martin, J. H. “Foundations of a Multilayer Annotation Framework for Twitter Communications During Crisis Events”. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12) (Istanbul, Turkey, May 2012), N. Calzolari (Conference Chair), K. Choukri, T. Declerck, M. U. Doğan, B. Maegaard, J. Mariani, J. Odijk, and S. Piperidis, Eds., European Language Resources Association (ELRA).

De Longueville, B., Smith, R. S., and Luraschi, G. “‘OMG, From Here, I Can See the Flames!’: A Use Case of Mining Location Based Social Networks to Acquire Spatio-Temporal Data on Forest Fires”. In Proceedings of the 2009 International Workshop on Location Based Social Networks (New York, NY, USA, 2009), LBSN ’09, ACM, pp. 73–80.

Dilrukshi, Inoshika, and Kasun de Zoysa. “A Feature Selection Method For Twitter News Classification”. International Journal of Machine Learning and Computing 4, no. 4 (2014): 365-370.

Cheong, France, and Christopher Cheong. “Social Media Data Mining: A Social Network Analysis of Tweets During the 2010–2011 Australian Floods”. In PACIS (2011).

Ghosh, S., Sharma, N., Benevenuto, F., Ganguly, N., and Gummadi, K. “Cognos: Crowdsourcing Search for Topic Experts in Microblogs”. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval (2012), SIGIR ’12.

Ghosh, S., Viswanath, B., Kooti, F., Sharma, N. K., Korlam, G., Benevenuto, F., Ganguly, N., and Gummadi, K. P. “Understanding and Combating Link Farming in the Twitter Social Network”. In Proceedings of the 21st International Conference on World Wide Web (2012), WWW ’12.

Grier, C., Thomas, K., Paxson, V., and Zhang, M. “@spam: The Underground on 140 Characters or Less”. In Proceedings of the 17th ACM Conference on Computer and Communications Security (New York, NY, USA, 2010), CCS ’10, ACM, pp. 27–37.

Jungherr, Andreas. “The Logic Of Political Coverage On Twitter: Temporal Dynamics And Content”. Journal of Communication 64, no. 2 (2014): 239-259.

Kim, Young An, and Gun Woo Park. “Topic-Driven Socialrank: Personalized Search Result Ranking By Identifying Similar, Credible Users In A Social Network”. Knowledge-Based Systems 54 (2013): 230-242.

Polat, Burak. “Twitter User Behaviors In Turkey: A Content Analysis On Turkish Twitter Users”. Mediterranean Journal of Social Sciences (2014).

Reyes, Joseph Anthony L., and Tom Smith. “Analysing Labels, Associations, And Sentiments In Twitter On The Abu Sayyaf Kidnapping Of Viktor Okonek”. Terrorism and Political Violence (2015): 1-19.

Sharf, Zareen, and Anwar Us Saeed. “Twitter News Credibility Meter”. International Journal of Computer Applications 83, no. 6 (2013): 49-51.

Verma, Monika, Divya Divya, and Sanjeev Sofat. “Techniques to Detect Spammers In Twitter- A Survey”. International Journal of Computer Applications 85, no. 10 (2014): 27-32.

Zhang, Yifeng, and Xiaoqing Li. “Relative Superiority of Key Centrality Measures For Identifying Influencers on Social Media”. International Journal of Intelligent Information Technologies 10, no. 4 (2014): 1-23.