Posted: March 6th, 2022
Emerging APT from China: Focus on APT 17
Computer Sciences and Information Technology
Topic:
Cyber Security APT
Emerging APT from China: Focus on APT 17, 18, 19, and 40 to Control and Mitigate Chinese State-Sponsored Attacks
Table of Contents
1. Executive Report
1.1. Security Problem Under Investigation
1.2. Background Information on Advanced Persistent Threats
1.3. A Root Cause Analysis of the Problem
1.4. A Description of the Stakeholders
1.5. An Analysis of the System and Processes
1.6. A Description of the Project Requirements
1.7. Data Available
1.8. The Industry Standard Methodology for Design and Development
1.9. Deliverables Associated with Design and Development of the Technology Solution
1.10. Implementation Strategy and Expected Outcome
1.11. QA Plan for the Solution
1.12. Assessment of the Risks of the Implementation
1.13. Technology Environment, Tools, Related Costs, and Human Resources
1.14. Project Timeline and Milestones
1.15. Framework for Assessing the Project
1. Executive Report
1.1. Security Problem Under Investigation
The Advanced Persistent Threat (APT) has been among the most complex and intractable cyber threats of the last ten years. Because APT campaigns are carried out through a variety of techniques, including social engineering, command-and-control servers, phishing, and remote desktop control, existing anti-virus systems are increasingly ineffective: they were built to deal with conventional, stand-alone malware. Moreover, data transfer from the infected network to the APT actors is typically well hidden and embedded in regular traffic, complicating the identification of APT attacks to the extent that even large anti-virus companies cannot say what proportion of actual attacks are APTs. To make matters worse, APT perpetrators tend to be well-organized and often government-funded groups of hackers and experts who can create and maintain purpose-built programs for their objectives and exploit stolen information. While most attempts to protect against APT attacks rely on technology alone, this study argues for developing a systemic awareness by examining the actions of APT attacks and the evolution of the actors behind them. The current research traces, on the one side, the development of the relevant technology and APTs, and on the other, the behavioral shifts of the attacking groups. In doing so, the study aims to produce a clearer roadmap that cyber protection services may use as a guideline against APT actors. Since 2013, Chinese APT groups have been involved in targeting and stealing knowledge from the U.S. Government, the aerospace industry, foreign governments, biotechnology businesses, and information technology firms. Such attacks are coordinated through sophisticated malware mechanisms that stealthily bypass established protection controls and maintain a presence in host systems for a considerable period, enabling large-scale data exfiltration (FireEye, n.d.).
These attacks exploit zero-day vulnerabilities, allowing them to inflict catastrophic losses on target organizations, according to the FireEye study.
1.2. Background Information on Advanced Persistent Threats
Security literature has described APTs in several ways that have not contributed to a clearer understanding. This paper is intended to provide a rundown of the basic features of APTs, how they normally operate, and what kinds of security controls are available to help minimize the possibility of an attack. Network defense is largely about plugging the gaps an intruder might get in through; however, you also need safeguards that spot indicators of an ongoing threat before it succeeds. It is important to clarify how a multi-faceted security solution against APTs, including security protocols, will minimize the chances of a successful attack.
APT and the Changing Landscape of Threats
Studies of APTs tend to open with alarming references to the evolving threat environment and stories of how widespread highly advanced APTs have become. That framing can be misleading. Most APTs utilize tactics that have been used for years: phishing emails, social engineering, backdoor bugs, and drive-by installs, to list the main ones. Broken down into their individual pieces, such APTs are neither advanced nor especially complex, and they almost always depend on the weakest link in any organization, the user. What sets an APT apart from other attempts to breach protection is the way attackers combine variations of these tactics and the relentlessness of their efforts. Over the last few years, the term APT has become commonly known and widely abused. You will sometimes hear the term advanced targeted attack (ATA), which usually refers to the same issue. APT and ATA are often used to characterize anything from assaults on high-profile corporations or nation-states to ordinary cybercrime operations, hacking tactics, or even individual malware samples. It has therefore been difficult for many businesses to look through the hysteria and understand what the APT is and is not, and what they should do to prevent or at least identify it.
Common APT Traits
1. Targeted: APTs are often aimed toward a single agency, party, or sector. Before the assault, thorough analysis on the part of the hackers will be needed to gather information regarding their target. These groups of hackers are typically well-funded and well-coordinated.
2. Goal-oriented: Attackers usually know what they intend to do or access when they get in. With adequate intelligence, the hackers would have a range of choices to infiltrate the network and reach the data or systems they seek.
3. Persistent: Having found a path into a network, the attackers treat the first compromised client not as the prize but as a means to an end. Once inside, they are likely to move progressively deeper through the network, targeting accounts with access to more sensitive data, e.g., IT administrators or senior executives with the authority to access higher-value systems.
4. Patient: While many cyber-attacks are designed to wreak havoc immediately by locking users out of devices or stealing information, APTs are more likely to do little at first. The intention is for the attack to go unseen, and the easiest way to achieve that is to avoid drawing suspicion in the first place. This inactivity can extend for days, weeks, months, or even years.
5. Call home: No attack is complete without contact with the outside world; at some point, the malware is going to call home. It may do so after the first device has been infected, after the targeted data is found and collected, or once the infected systems have adequate access to the data. Communication with the command-and-control (C&C) host is usually a repetitive procedure, used to obtain additional orders or to start exfiltrating data in bite-sized pieces.
The typical APT lifecycle
The general life cycle of the APT is simplest to illustrate through descriptions of methods that have been used successfully in the past. The example below is not one exact APT but a composite based on specific cases. The order of some of the phases is interchangeable, e.g., lateral movement can begin before data is located, and other phases recur throughout the whole life cycle, e.g., contact with the C&C host.
Figure 1. The typical APT lifecycle
1. Collecting intelligence: A group of attackers has decided to target a corporation in the pharmaceutical business. Their goal is to collect data on a medication currently in development that will give the target business a competitive edge. They research the Internet, particularly social media platforms, and attend trade fairs to build a picture of the business and identify key employees. The Internet and social media are top sources of intelligence: even when staff do not share classified company details, tweets about business trips and organizational activities may offer exactly the opening attackers are searching for.
2. Finding a point of entry: Based on the data obtained in Phase 1, the attackers learn that the organization plans to host a sales kick-off conference in Las Vegas in the spring, meaning multiple workers at all ranks will be in the same place at the same time. They visit the venue around the day of the event and leave multiple USB keys in the conference room and in places they know the organization's staff will pass through. They hope that curiosity will get the better of someone, who will plug in a USB stick either to figure out who owns it or simply to see what is on it. However enticing it may be, a USB key found lying around might hold just about anything. Similar techniques were used with Stuxnet, the APT that targeted Iran's nuclear program in 2010: USB keys were left in a parking lot that the perpetrators knew was used by nuclear plant workers.
3. Call home: If one of the workers plugs in the USB key, the malware it carries notifies the attackers that it has successfully gained a foothold. To this end, it is configured to call home to the command-and-control (C&C) server, report where it is, and obtain new commands. In certain instances, this first call home is intended to upgrade the malware to something better suited for the next phase of the attack. In the traditional APT lifecycle, contact with the C&C host is a repetitive procedure over the entire lifecycle, which lets the malware evolve as more information is learned. Call home is also the stage where protection gaps become costly: if no one is monitoring incoming and outgoing traffic, communication may be taking place that is not in the user's favor.
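One practical way defenders exploit the repetitive call-home behavior is to look for outbound connections at suspiciously regular intervals. The sketch below is an illustrative toy, not a production detector: the timestamp format and jitter threshold are assumptions for the example.

```python
# Toy beacon detection: flag hosts whose outbound connections occur at
# near-constant intervals, a common C&C "call home" pattern.
# The log format (seconds since start) and threshold are illustrative.
from statistics import pstdev

def looks_like_beacon(timestamps, max_jitter=2.0):
    """Return True if inter-connection gaps are nearly constant."""
    if len(timestamps) < 4:
        return False  # too few samples to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) <= max_jitter

# A host checking in every ~60 seconds vs. a user browsing irregularly.
beacon = [0, 60, 121, 180, 241, 300]
browsing = [0, 5, 47, 200, 203, 390]
print(looks_like_beacon(beacon))    # regular gaps -> likely beacon
print(looks_like_beacon(browsing))  # irregular gaps -> likely benign
```

Real beaconing malware adds deliberate jitter and blends into busy protocols such as HTTPS or DNS, so statistical baselining over longer windows is needed in practice.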
4. Searching for data/assets: After compromising the first victim, the malware gets its first view of the network from the inside; no amount of external research can offer that information. Depending on the user's credentials, the attackers might already have access to systems of interest, or the compromised client may only be a stepping stone. In this scenario, the compromised client belongs to an administrative employee with no access to the medication data systems, so the next move is to determine which systems are of interest and who has access to them. Port scanning is a common technique used to identify which systems a device can reach and which may contain useful data. Antivirus, application monitoring, and intrusion protection technologies can identify a range of harmful port scanning tools.
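The port scanning mentioned above can be sketched with a minimal TCP connect scan using only the standard library. This is for illustration against hosts you control; scanning systems you do not own is unlawful in most jurisdictions.

```python
# Minimal TCP connect scan of the kind attackers use to map which
# services a compromised host can reach. Scan only hosts you own.
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

print(scan_ports("127.0.0.1", range(8000, 8005)))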
5. Moving through the network: The ideal target for such an assault is the IT department, since its staff typically have access to a larger variety of systems than a normal employee. Once the attackers have learned who the IT workers are and how hard it would be to access their computers, one of two things usually happens: (a) the attackers compromise other clients as stepping stones toward their goal, or (b) they launch a more direct assault on those systems, e.g., social engineering or a known flaw aimed directly at IT workers. Previously disclosed bugs can create many opportunities for attackers to compromise a network, so it is essential that all applications are kept up to date with security updates.
6. Extracting data: After the adversaries have entered the systems of interest and identified the data they are searching for, retrieving it is the next challenge. This is the point in the attack where the malware makes regular contact with the C&C host, since it is likely to retrieve data in small, encrypted sections to prevent detection. Beyond inspecting outbound traffic, you need monitoring measures in place that surface suspicious behavior trends. Complete real-time monitoring capabilities, including historical records, can help recognize traffic peaks for particular hosts or data forms, such as encrypted files.
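The traffic-peak detection suggested above can be sketched as a simple baseline comparison: flag any host whose outbound volume exceeds its historical mean by several standard deviations. The field names, sample data, and threshold are illustrative assumptions, not a real monitoring API.

```python
# Sketch of spotting exfiltration via outbound-volume anomalies.
from statistics import mean, pstdev

def exfil_suspects(history, today, k=3.0):
    """history: {host: [daily bytes-out]}; today: {host: bytes-out}.
    Flag hosts more than k standard deviations above their baseline."""
    suspects = []
    for host, baseline in history.items():
        mu, sigma = mean(baseline), pstdev(baseline)
        # max(sigma, 1) avoids a zero threshold for perfectly flat baselines
        if today.get(host, 0) > mu + k * max(sigma, 1):
            suspects.append(host)
    return suspects

history = {"ws-17": [1_200, 1_500, 1_100, 1_400],
           "ws-18": [900, 1_000, 950, 980]}
today = {"ws-17": 250_000, "ws-18": 1_020}
print(exfil_suspects(history, today))  # ws-17's volume spike stands out
```

APT exfiltration deliberately trickles data out in small pieces to defeat exactly this kind of threshold, which is why the text stresses long historical records rather than single-day comparisons.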
Summary of the Problem Approach
Layered network security was proposed as the most feasible way to address the APT threat. Protection frameworks can be designed and developed at a particular layer of the OSI network layer model. The first is protection at the application layer. Protection measures added at this layer focus on the individual application, and separate protection mechanisms are required for different forms of application. Programs must be kept updated to maintain application-layer protection. It is very complex to design a cryptographically sound application protocol, and executing it correctly is even more daunting.
Consequently, the application-layer protection protocols for securing network communications have largely been limited to standards-based implementations that have been in operation for some time. Secure/Multipurpose Internet Mail Extensions (S/MIME), typically used to encrypt e-mail messages, is a clear illustration of an application-layer protection protocol. DNSSEC is another protocol at this layer, used to exchange DNS request messages securely. The security layers include:
• Transport Layer: Protection mechanisms at this layer secure the data in a single communication session between two hosts. The most popular use for transport-layer security protocols is the defense of HTTP and FTP session traffic. Transport Layer Security (TLS) and Secure Sockets Layer (SSL) are the most common protocols used for this purpose.
• Network Layer: Protection mechanisms at this layer apply to all applications; they are not application-specific. This layer can cover all network communication between two hosts or networks without changing any program.
In certain environments, network-layer protection mechanisms such as Internet Protocol Security (IPsec) offer a far safer alternative than transport- or application-layer controls, due to the difficulty of applying safeguards to individual applications. However, protection protocols at this layer offer less communication flexibility than some applications may need. Note also that a security mechanism configured to run at a higher layer cannot provide safety for data at lower layers, since the lower layers perform functions the higher layers are not aware of. It may therefore be appropriate to implement several protection measures together to improve network security. In the following pages, a wide-ranging discussion of the protection frameworks used at the various layers of the OSI networking architecture to attain network security against APTs will be presented.
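As a concrete illustration of the transport-layer protections discussed above, Python's standard ssl module can wrap an ordinary TCP socket in TLS. This is a minimal sketch of the secure-by-default setup; the connection itself is left to the caller.

```python
# Transport-layer security from application code: wrap a TCP socket
# in TLS using Python's standard ssl module.
import socket
import ssl

# create_default_context() enables certificate verification and hostname
# checking, which is what defeats trivial man-in-the-middle interception.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # certificates verified
print(context.check_hostname)                    # hostnames checked

def open_tls(host: str, port: int = 443, timeout: float = 5.0):
    """Return a TLS-wrapped socket to host:port (caller must close it)."""
    raw = socket.create_connection((host, port), timeout=timeout)
    return context.wrap_socket(raw, server_hostname=host)
```

The design point worth noting is that TLS protects a single session between two endpoints; it does nothing for traffic outside that session, which is exactly the application-specificity limitation the network-layer (IPsec) approach avoids.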
The goal and objective of the project
As described in the section above, a significant number of weaknesses in the network arise as a consequence of APTs. These weaknesses leave data particularly susceptible to attack during transmission: an intruder may tap a communication link, extract and read data, or re-insert a false message to accomplish his objectives. Thus, the current network protection initiative is not only concerned with the security of the computers at either end of the connection chain but also seeks to ensure that the whole network is safe. The project's goals and objectives are therefore to secure the accessibility, stability, credibility, and protection of the network and its data against APT risks. Effective network protection prevents APTs from joining or spreading across a network. The main objectives of network protection are thus confidentiality, integrity, and availability. It is worth remembering that these three foundations of network defense are also described as the CIA triad.
• Confidentiality: The confidentiality purpose is to shield sensitive business data from unauthorized individuals. The confidentiality aspect of network protection guarantees the data is only accessible to expected and designated individuals.
• Integrity: This objective requires preserving and ensuring the authenticity and quality of the data. The integrity function guarantees that the data is accurate and that unauthorized parties cannot alter it.
• Availability: The availability function of network protection is to ensure that data and network resources/services are accessible to legitimate users on an ongoing basis, whenever needed.
The Expected Results
While the aims of network protection may seem simple and the desired outcomes clear, the methods used to accomplish these objectives are in fact highly nuanced, and interpreting them requires sound reasoning. Per the International Telecommunication Union (ITU) recommendation on security architecture, X.800, the initiative aims to incorporate frameworks that standardize the approaches used to achieve network security. These measures provide Encryption, Digital Signatures, and Access Control.
• Encryption: This mechanism contributes to data confidentiality by translating data into formats unreadable by unauthorized individuals. It utilizes a secret-key encryption-decryption algorithm.
• Digital Signatures: This mechanism serves as the electronic equivalent of an ordinary signature on digital data, and its function is to establish the authenticity of the data.
• Access Control: This mechanism goes a fair way toward delivering access-control facilities, using an individual's identity and authorization to assess and enforce an entity's access rights.
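The three mechanisms above can be sketched with the standard library. Note the hedges: HMAC stands in for a true digital signature here (it proves integrity and authenticity only to a peer holding the shared key, not to third parties), and the key and ACL entries are invented for illustration.

```python
# Integrity/authenticity via HMAC (a stand-in for digital signatures)
# and a deliberately simple access-control list.
import hashlib
import hmac

KEY = b"shared-secret"  # illustrative only; never hardcode real keys
ACL = {"alice": {"read", "write"}, "bob": {"read"}}  # hypothetical entries

def sign(message: bytes) -> str:
    """Tag a message so any later alteration is detectable."""
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # compare_digest resists timing attacks on the comparison itself
    return hmac.compare_digest(sign(message), tag)

def allowed(user: str, action: str) -> bool:
    return action in ACL.get(user, set())

msg = b"transfer report Q3"
tag = sign(msg)
print(verify(msg, tag))                    # untampered message verifies
print(verify(b"transfer report Q4", tag))  # any alteration is detected
print(allowed("bob", "write"))             # bob lacks write permission
```

A real deployment would use asymmetric signatures (so verification needs no shared secret) and centrally managed authorization, but the division of labor shown here mirrors the X.800 mechanisms listed above.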
After designing and defining the different protection measures to achieve network security against APT attacks, it is of considerable significance to determine where to implement them, both physically and logically (i.e., in which layer of an architecture such as TCP/IP).
1.3. A Root Cause Analysis of the Problem
Most cyberattacks need human activity to succeed, such as activating a macro, opening a file, following a link, or opening a document, which demonstrates the value of social engineering in enabling effective attacks. Harris (2019, p. 45) cites a study from Proofpoint illustrating how cyber attackers exploit humans rather than networks and technology to install malware, facilitate fraudulent transactions, and steal crucial information. The study, based on an 18-month review of data gathered from Proofpoint's global customer base, also found that Microsoft lures remain a staple: almost 1 in 4 phishing emails sent in 2018 were related to Microsoft goods, while 2019 witnessed a shift toward cloud-productivity, DocuSign, and Microsoft cloud phishing. The top phishing lures are based on credential fraud, generating feedback loops that can in turn enable APTs, lateral movement, internal phishing, and more.
Threat actors refine their methods and strategies in the quest for financial gain and identity theft. Although one-to-many APTs, as well as one-to-one attacks, were more prevalent when impostor attacks started to surface, threat actors are now seeing success with attacks using more than five accounts against more than five people in targeted organizations. Over the past 18 months, the top malware families have regularly featured banking Trojans, information stealers, remote access Trojans (RATs), and other non-destructive strains engineered to stay resident on compromised computers and continually steal data that could be of future interest to threat actors.
Human-centric risks
Attackers target individuals, and not usually conventional VIPs. They also target Very Attacked People (VAPs) situated deep inside the company. These people are more likely to be targets of opportunity: users with easily searched addresses and access to funds and confidential details. Nearly 36 percent of VAP identities could be traced digitally via company blogs, social networking, magazines, and more. For VIPs who are also VAPs, almost 23 percent of their email addresses could be identified by a Google search. Attackers imitate business routines to prevent detection. Impostor message distribution strongly resembles genuine corporate email-flow trends, with fewer than 5 percent of all messages sent on weekends and the largest share, over 30 percent, sent on Mondays. Malware actors are less inclined to follow expected email traffic patterns.
Email Attacks: At Risk Verticals
Education, finance, and marketing rated as the industries with the greatest Attack Index, a cumulative measure of attack severity and risk. The education industry is targeted by attacks of the greatest severity and has one of the highest average rates of VAPs across industries. The financial services sector has a very high average Attack Index but fewer VAPs. Impostor attacks at their height averaged more than 75 attacks per business in 2018 in the manufacturing, industrial, and education industries. This could be due to supply-chain complexities in the engineering and automotive industries, and to high-value targets and vulnerable users, especially students, in the education sector. In the first half of 2019, the most targeted sectors shifted to financial services, manufacturing, education, health care, and retail.
1.4. A Description of the Stakeholders
The project stakeholders are data security personnel and the government departments accountable for the safety of sensitive assets. Cyber defense staff are responsible for detecting, assessing, and alleviating APT attacks and other risks. At the same time, federal departments offer legal and logistical advice on tracking and prosecuting offenders. Collectively, these partners maximize the initiative's performance, allowing it to accomplish its strategic priorities in the public and private sectors. The network technology sector is growing into an interactive 21st-century industry and has earned global attention as a critical national security component. Previously, cyber protection was treated as the responsibility of software developers; as the need for information protection grew more prominent, entities devoted extra capital to securing digital networks. The two main players in the computer protection business are private and government agencies, and each sector has its own specific strengths and price tags. The Government's cyber element can be further subcategorized into civilian agencies and government contractors.
The private sector has seen technology companies expand rapidly in this field. Organizations including Amazon, Apple, Facebook, and Google have experienced record development over the past ten years. These companies provide revolutionary products and services that people desire, most of which are not available elsewhere, and are so popular that they can comfortably afford to recruit the best and brightest computer scientists in the world. Four of the dominant data-protection accounting firms are Deloitte, PwC, EY, and KPMG. These four firms provide specialized services, such as penetration testing, IT compliance auditing, and incident management. These services are often mandated under numerous federal laws, such as the required protection of consumer records. The data protection sector also has thousands of smaller companies that specialize in a single service product; despite these specializations, smaller companies' performance has traditionally lagged behind the dominant accounting firms. In June 2018, the U.S. DoD adopted NIST Special Publication 800-171, Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations, which mandates cyber protection specifications for unclassified networks. To conform with SP 800-171, a substantial portion of the business sector is engaged to meet federal standards. Consulting companies must offer outstanding services; otherwise, they can be held legally responsible for not identifying flaws during their assessments.
Compared to the private sector, the Government has been unable to retain a viable data security workforce. The NSA has long been regarded as the leading power in the cyber realm; the agency imposes some of the strictest criteria on candidates, sometimes exceeding the private sector's requirements. Although that agency has maintained its ability to secure classified networks, other organizations are struggling at an unprecedented pace. Salary is one of the more noticeable retention problems in the federal cyber world: the business sector pays about twice as much as the Government can manage, making retention a daunting challenge. Especially in entry-level jobs, federal agencies struggle to match what the private sector offers college graduates. Both Homeland Security and the DoD are providing advanced pay rates to promote data security careers, but these pay grades remain significantly smaller than those of the private sector.
Simultaneously, like the great space race of the previous century, the computer security business needs a vital boost. The shockingly apparent shortage of cyber expertise is going to take years to fill. Recent developments in science, technology, engineering, and mathematics education could ease the shortage, but professional cyber skills take years to create and soon become a lifetime engagement in a highly diverse landscape. Schools are starting to teach cybersecurity, yet they are just scratching the surface of the problem. Academic programs expose students to advanced computer science techniques, but basic undergraduate schooling does not sufficiently train them for a promising future in the data security industry. APTs are changing, technology is improving, and practical approaches rapidly become outdated. The necessary skills (incident response, encryption, digital forensics, data recovery, log review, patch protection, etc.) are extremely technical and require finesse; no single analyst can cover them all. This makes the most important commodity in computer defense, human imagination, also its most vulnerable resource.
1.5. An Analysis of the System and Processes
Protocols and Networks
A network consists of a collection of devices linked to each other by some communication medium. The communication channel can consist of any physical wired or logical wireless medium, and any connected electronic system is known as a node. Computers and printers are examples of nodes on a computer network; on a telephone network, they may be cell phones, poles, and main control units. The defining feature of a node is that it has its own identity, in the form of a unique network identifier. The key role of any network is to share resources between nodes: under agreed rules, the network locates resources and transfers them between nodes in a manner that preserves authenticity and protection. Network protocols are the rules for communicating between network nodes. A protocol is a complete collection of rules controlling the interaction of two systems, and it varies with the roles assigned to the communicating nodes.
The Open Systems Interconnection (OSI) Reference Model
In 1984, the International Organization for Standardization (ISO) defined a standard communication structure for heterogeneous systems in a network. Because it describes the communication features of devices in an open world, this framework is named the Open Systems Interconnection (OSI) model. The OSI reference model offers a basis for breaking down complex inter-networks into components that can be more readily interpreted and used (Stellflue 2020, p. 67). The aim of OSI is to enable any device anywhere in the world to interact with any other, provided both comply with OSI standards. The OSI reference model defines seven layers, and every layer has its own functionality; these layers are separate but cascaded, with communication flowing between them in a proper sequence. With respect to this standard communication structure, the group of layers is referred to as the OSI layers, and each layer's functionality differs from the others'.
Looked at from the standpoint of device configuration, three levels of abstraction are recognized in OSI: the architecture, the service definitions, and the protocol specifications. The OSI service definition describes the basic services provided between the user and the device at a specific layer. The parallel OSI protocol specification defines the form of protocol operating beneath the communication service. The combination of these two parts forms the framework of the OSI method.
The OSI model comprises seven layers, and each layer provides different functionality and different protocol services. Each layer, except the lowest, builds on the layer below it, essentially isolating the lower layer's details from the higher-layer functions. Analogous to the design concept of information hiding, the lower layer handles the details of data handling while the upper layer is shielded from those specifics. Within each tier, all facilities are offered to the next higher layer, and the peer-layer protocol is spoken with the corresponding layer on the other system. Therefore, a change in any layer N affects at most the layer directly below it (N-1); the higher layers (N+1 and above) are isolated, and the rest of the reference model is unaffected.
Physical Layer
The lowest layer in the OSI model is the Physical Layer; it allows communication between device interface cards and the physical media. This layer recognizes and converts electrical impulses into bit form. It handles the actual wired and/or conceptual wireless link between the device interface card and the networking medium; typical standards at this layer include the RS-232, V.24, and V.35 interfaces.
Data Link Layer
The Data Link Layer is the second layer in the OSI Reference Model. It is responsible for control methods that ensure a proper data format, mediate access to the physical medium, and handle data-flow errors. The unit of data at the data link layer is the frame; the data link layer is therefore responsible for framing the data so that it identifies the organization from which the information is being transported. Error-control requirements and other link-control procedures operate just above the physical layer, such as the Cyclic Redundancy Check (CRC), the error-checking process run when a frame is transmitted from the source side. The same process runs on the destination side, and if a discrepancy is identified after comparison, the receiver asks the source to send the frame again.
Media Access Control (MAC)
This sublayer handles collision avoidance and physical addressing. The data link layer is split into two sublayers, the Logical Link Control (LLC) and the Media Access Control (MAC). Logical link control is responsible for flow control and data error detection, while media access control is responsible for managing physical addressing.
Network Layer
The third layer of the OSI Reference Model is the Network Layer. This layer is responsible for creating a logical link between the source and the destination. The unit of data at this layer is the packet. The following facilities are provided by the network-layer protocols.
Connection Modes
The network layer supports two forms of communication between source and destination. The first is connectionless communication, which does not involve acknowledgment of a connection; the Internet Protocol (IP) is an example. The second is connection-oriented communication, which provides acknowledgment of the connection; TCP is a clear example of this form.
IP address
Each node in a computer network has its own unique logical address. This unique sender and receiver identification always allows the proper connection to be made. It is enabled by the network-layer protocol, which carries the source address and the destination address in its header fields. This also reduces the risk of packet loss, network congestion, and unnecessary broadcasting.
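As a sketch of how those header fields carry the sender and receiver identities, the fixed 20-byte IPv4 header can be unpacked with Python's standard library. The sample header below is hand-built with illustrative documentation addresses, and the checksum field is left at zero:

```python
import struct
import socket

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header. The source and destination
    addresses occupy the last two 32-bit fields, which is how the
    network layer identifies sender and receiver."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,
        "ttl": ttl,
        "protocol": proto,  # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# Hand-built sample header (illustrative values, checksum not computed)
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("192.0.2.1"),
                     socket.inet_aton("198.51.100.7"))
hdr = parse_ipv4_header(sample)
# hdr["src"] == "192.0.2.1", hdr["dst"] == "198.51.100.7"
```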
Presentation Layer
The sixth layer of the OSI Reference Model is the Presentation Layer. This layer is responsible for translating transmitted data into the representation the receiving application expects. Data compression and decompression are key features of this layer, and data encryption is performed in the Presentation Layer before data is distributed.
TCP/IP Protocol Suite
The OSI reference model comprises seven layers, whereas the TCP/IP protocol suite has just four. Compared to the OSI reference model, the TCP/IP suite offers a more direct view of traffic from source to destination. The suite provides managed connectivity and efficient data processing: each layer offers a defined communication service to the layer above it and relies on the layer below, and its protocols together provide consistent service quality and data security. The core protocols of the suite are TCP and UDP, which reside in the transport layer. TCP is a connection-oriented protocol that ensures reliable data transfer, while UDP is a connectionless protocol used in data-streaming services such as VoIP.
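The TCP/UDP distinction surfaces directly in the socket API: the socket type selects a reliable, connection-oriented byte stream or an unacknowledged datagram service. A minimal Python sketch:

```python
import socket

# Connection-oriented (TCP): a three-way handshake establishes the
# connection, and the kernel acknowledges and retransmits data.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Connectionless (UDP): datagrams are sent without a handshake or
# delivery acknowledgment, which suits streaming uses such as VoIP.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

assert tcp.type == socket.SOCK_STREAM
assert udp.type == socket.SOCK_DGRAM
tcp.close()
udp.close()
```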
1.6. A Description of the Project Requirements
This section aims to provide the Executive Committee with the details needed to facilitate the creation, management, hosting, and use of an online system to broaden the organization’s network. It outlines the high-level technological and organizational specifications, including the functions and duties required to sustain such a system, the company’s obligations, and the responsibilities of other parties. It also provides a cost estimate for implementing and operating this framework for employees of the organization. It does not address further extension of the system’s scope. This functional and technical specifications section describes the functional, performance, protection, and other system criteria defined by the organization’s expanded-system development working group as the conceptual information-system solution for the organization’s growth.
The content of the online profiles is meant to replicate the content already contained in the organization’s extended pilot profiles. The system would also give Association member entities safe access to build and manage their online accounts, including, where possible, real-time data validation mechanisms, and could send alerts to profile holders when time-sensitive details become outdated. Importing data from stable government networks needed for online profiles would speed up profile completion and improve the timeliness of the data. Member organizations would be able to control user rights for account management within their organization. Both members and non-members could access published profiles and download copies of individual profiles, and members could export online profile details via an application programming interface (API) for use in their local subcontracting or grant-management programs. The resulting data can serve as evidence in negotiations with the federal government to reduce regulatory burdens and demonstrate wise stewardship of federal funds. This work involves the initial implementation of a web-based framework based on knowledge and input received during the Phase 1 pilot. References to potential planning issues appear in this plan mainly for reference purposes.
Contact Points
Knowledge and Communication
The contact points for this project are listed on the first page of this proposal. Once the Executive Committee has accepted the project, this paper will act as a framework outlining the agreed roles and specifications. A representative of each company may be required to sign a declaration recording their organization’s acceptance of its roles and obligations.
Functions and Responsibilities
Security experts may act as the lead developers for this system and lead continuing maintenance and support until the parties and/or the corporate Executive Committee decide to shift these duties to another group.
System Implementation
The Working Group will assist security experts in developing and managing this system and offer analysis, input, and acceptance at all levels of development. The administrators of this working group hold responsibility for the technical management and maintenance of the network infrastructure as outlined in the next section (Yanakiev and Tagarev 2020, p. 29). The System Development Working Group will provide guidance and technological support. The Organization Executive Committee will act as the ultimate control body, ensuring that adequate analysis, assistance, and acceptance are provided throughout the process.
Requirements for Member Organization Subscribers
Organization participants who contribute to the scheme will sign a business usage agreement covering the following: subscribers will plan, submit, and certify their initial profile within 60 days of their logon ID being generated (the subscriber profile will not be publicly viewable before this phase is completed); subscribers promise to have internal procedures in place to ensure their profile details are kept current and accurate. Subscribers who do not consistently conform to the profile’s maintenance requirements may be removed at the absolute discretion of the company, though no subscriber would be suspended without a first opportunity to recover. Pass-through company users agree to use subscriber accounts to receive static/annual details for their subscriptions and not to request the same information from fellow subscribers in other formats. Subscribers commit to engage in uniform sub-awarding during the pilot phase and to supply their data as required by the company. Subscribers who use the API consent to update their local data regularly or download a profile for use whenever they want.
Administrative Assistance and Monitoring
The administrators will act as an online network support team to handle the following tasks: creating and managing all required user documents, FAQs, surveys, and reports; supplying a necessary set of values or detailed technical specifications for areas such as database warning components and validation key fields; and updating each profile when uploaded or changed. Administrative help may also track usage of the network, including user API download times and the pace of user updates, to monitor network requirements; retain knowledge of needs related to changes in data components, additions, deletions, or adjustments; manage contact between users; and provide users with regular updates, including status reports on the system.
Description of the New Framework
There is currently no unified, online electronic archive providing all the details that transacting agencies need to carry out risk assessments and continuously track static or annualized data relevant to sub-recipient monitoring. Instead, selected records are maintained in federal agency networks, such as the System for Award Management or the Federal Audit Clearinghouse (FAC), with the remainder held by the agencies themselves. Certain data expected under the uniform guidelines to be used for this purpose, such as copies of A-133/single audits or federal management decisions, are not yet accessible to transacting agencies at the national level. This shortage of details and the dispersion of data caused research agencies to create their own data collection records, used at the point of sub-awarding or renewal. The multitude of approaches, together with the many organizations gathering data on a per-sub-award rather than a per-entity basis, has culminated in substantial logistical pressures without a proportionate gain in risk control.
The organization’s extended network Phase 1 pilot set up a framework through which each pilot agency maintains a standard dataset and answers to questions in an organization profile. These individual profiles are currently stored in Excel and translated to a consolidated website repository in PDF format. Data on administrative stress relief is being compiled and will be published periodically starting in the fall of 2016; early data show that relief will be substantial. However, the current procedure depends on minimal data validation options on the profile itself and on extensive manual analysis, authorization, and loading of documentation by the various persons participating in the process (institutional members, an organization-wide network of volunteers, and organizational staff). In addition, there is no data download available for use in local networks. These findings, along with the strong performance of the organization’s initial financial conflict of interest network, which gradually opened to non-organizational participants and now comprises more than 1,000 organizations, pointed to the need to shift toward a more automated and electronically robust mechanism for the longer term. This plan represents the company’s attempt to develop such an infrastructure.
1.11. Quality Assurance (QA) Plan for Solution
Effective businesses have discovered how to sell products that customers like, at a price they are able to pay, in a way that yields a profit on the sale. Highly competitive firms provide premium goods and services in this trade and keep the quality high, so that the consumer returns the next time he or she wishes to buy. Quality has been defined as ‘the entirety of properties and capabilities of a product or service that bear on its capacity to fulfill specified or implicit needs’ (Yanakiev and Tagarev 2020, p. 29). This is not to be confused with a degree of excellence or with fitness for use, which satisfy only part of the definition. By this definition, protection is a quality aspect. The American Society for Quality has offered the following statements: quality is not a program; it is a business strategy; quality is a set of effective methods and ideas that have been shown to work; and quality tools and strategies are applicable to every area of the market.
Quality improves consumer loyalty, decreases turnaround time and cost, and removes mistakes and rework. Strong quality and financial outcomes are the inevitable results of good quality control. Security is described by Horton (2020, p. 45) as: 1) freedom from risk; safety; 2) freedom from suspicion, apprehension, or fear; confidence; 3) something that gives protection, such as: (a) a group or private guards; (b) government measures to deter spying, intrusion, or attack; (c) measures taken by a company or homeowner. Taken together, these concepts illustrate that quality is the duty of the whole organization, and that protection is part of the overall quality of the system, inherent in the preferences of the consumer. Quality can be part of how business is done (Singhal 2011, p. 178). This covers the quality of applications, but it also involves everything the company does and the security of its properties, both tangible and intangible. As a quality attribute, protection must be discussed across the organization: in strategy, in the formulation of policy, and in the execution and control of both.
There is a strong correlation between business success and disciplined quality fundamentals in management. Strengthening defense will improve this success. Applying the methods, principles, and strategies that have been shown to work in QA to protection can result in improved consumer loyalty, decreased rework, and increased performance. One practitioner, in a SANS certification paper, discussed one element of enhancing product quality, claiming that improvements in software quality will in turn minimize the risk of incorporating vulnerabilities into the system. The paper addresses how not only individual developers should fix security concerns in software but also how teams should work together to address them through code reviews, peer programming, and independent research teams.
Yet the role quality plays in an enterprise is broader than just the production of applications, and protection is not restricted to application software. Both quality and protection must be ubiquitous in the company to reach a true depth of effect, and both can be established from the launch of a program (Stellflue 2020, p. 67). Much as businesses once tried to ‘test quality in,’ companies still treat defense as the one item the organization gets together and bolts on afterward. It is therefore typically the messiest, ugliest, most user-unfriendly aspect of our processes; the only real solution, of course, would be to have all of our IT software, systems, and technology planned and installed with protection at the heart (Singhal 2011, p. 178). This can involve tactics, regulations, corporate structure, preparation, and more. Using continuous improvement as a comparison point, we can see several examples of how quality tools and strategies can help protection practitioners enhance protection through people, processes, and technology.
People
An organization’s understanding of protection may be a crucial success factor in building its security defenses. People subvert protection systems because of human behavior, not because they are trying to be evil. A consumer opens an email that states ‘YOU HAVE BEEN ATTACKED’ and automatically wants to read it. They receive an executable that promises dancing gophers and, of course, they want to see it (Stellflue 2020, p. 67). They see somebody at the entrance, hands full of pizza and beer, who cannot get in; they want to help, so they unlock the door and let them through. This is particularly true when someone in authority (or pretending to be in authority), or someone unhappy with them, asks them to do something that does not seem quite right. A help-desk worker might be persuaded to give away a secret over the phone if they assume it is the big boss at the other end.
These are instances of individuals pursuing an inherent interest, common decency, or a sense of self-preservation. A clear knowledge of protection, coupled with a sound strategy, would give them the resources to cope effectively with any of these scenarios. Security experts can supply the content needed to train consumers (Singhal 2011, p. 178). Quality practitioners can relate the material to the interests of the company as a whole and define the best strategies for getting the message out. They can also help assess the efficacy of the course, encouraging technical security experts to focus their resources where they matter most. People must work together to get protection work completed; no one can protect a company alone. QA practitioners, together with other departments within an organization such as human resources, have training in identifying appropriate processes within which work may be performed, including reporting roles, obligations, and authority (Horton 2020, p. 67). For example, QA may assist in identifying security organizational frameworks that bring all aspects of security under the responsibility of a single senior executive with numerous security divisions. Alternatively, for a particular entity with specific cultural requirements, there may be a division of responsibilities with protection checks and balances. Using guidance from the security personnel and the philosophy and strategic vision of the organization, a framework that suits the objectives of each community can be put in place.
Process
QA and quality control involve systematic preparation and execution of quality strategies in the enterprise, which can likewise provide protective policies for security. QA may argue that defense is an important part of the corporate plan at the highest level of the enterprise, and a strong protection framework should be published along with the known corporate approach (Stellflue 2020, p. 67). Examining and articulating compliance policy at the corporate level forces executives to confront the daunting challenge of defining which risks are acceptable and which are not. This involves distinguishing items that can only be approved by the enterprise from those that can be accepted by a single business entity or agency on its behalf. Understanding that the company cannot have both 100 percent protection and 100 percent functionality, cost versus reward must be taken into consideration in decisions about specific connectivity. Acceptable danger also requires a discussion of risk-reduction practices and alternatives. Security experts should be willing to negotiate with company administrators, at an appropriate level, what sorts of risks arise when networks are linked to ‘unknown’ networks such as the Internet. These variables help determine what the danger is and how it can be mitigated, and how the probability may be minimized informs what procedures should be in operation.
Protection is a process, not a product. A process can be described as a series of related or interacting activities that turn inputs into outputs. Processes describe what must be achieved, by whom, and when. They embody specific practices, likely utilizing technologies, and together they build the walls that serve as protection for an enterprise. Well-documented protection protocols and procedures enable individuals to take the appropriate steps without fear of reprisal. Even in the absence of well-documented protocols (though having them is preferable), systems may be structured to enhance protection. QA experts can aid in documenting the protocols needed for the design, execution, promotion, and tracking of protection measures. Working together with security professionals, they can help track and identify the activities required of system employees, customers, and business citizens to support these processes and procedures day to day (Sharp and Lang 2018, p. 45). Like legislation, procedures may be prone to change as systems change, as organizations reorganize, or when a new framework is adopted; policy, however, functions as a meta-document and offers guidance. Policy should stay broader and higher-level, whereas protocols address who performs what, and where. Closely followed, well-documented protocols can define liability and provide an audit trail at the point of a possible violation.
Since protection technologies and vulnerabilities evolve rapidly, protocols need to be designed with enough flexibility to allow work to be completed quickly in reaction to a danger to the company. Emergency change notifications, for example, cannot be set so rigidly that sufficient preventive or responsive action is blocked because the one right person could not be reached. In situations like this, a mixture of protocols and policies can enable the individual with the experience to do the correct thing without fear. Executives are expected to make choices regularly with insufficient knowledge and large potential costs; security experts should likewise be trusted to make the correct decision based on business policies.
Technology
The technical emphasis belongs specifically to the security specialist in a joint attempt to improve security in an organization. The development of procedures and strategies with a consistent technological security base helps accomplish the aim of enhanced security. However, there are ways for QA to support reliable, repeatable handling of technology by individuals. The following are samples of best practices in technological protection whose support can be provided by QA techniques. To protect the security of the network and of the applications and information on it, steps must be taken both to avoid attacks and to identify attacks as they arise.
Cybersecurity analysts and engineers use various methods to track, record, and evaluate data streaming through the network. One approach is to compare what they see with the usual behavior of each IP. In some instances, network and protection engineers build software or scripts to interpret the results. There are applications, such as tcpdump and Nmap, that can provide substantial details about what is occurring on the infrastructure and the networks. Nevertheless, the usage of these tools may not be consistent from one engineer to the next, and understanding of the tools often differs from person to person. Custom tools may not generate the intended findings because of basic logic errors, bugs, or misconfiguration. In many situations, the results are interpreted based on what the analyst believes he or she is observing, rather than being measured against a baseline.
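A baseline comparison of the kind described above can be sketched in a few lines of Python. The IP addresses, packet counts, and threshold factor below are all illustrative assumptions; a real baseline would be built from historical captures (e.g., aggregated tcpdump output):

```python
from collections import Counter

def flag_anomalies(observed: Counter, baseline: dict, factor: float = 3.0) -> list:
    """Flag source IPs whose observed packet counts exceed the baseline
    by more than `factor` times. IPs absent from the baseline default
    to an expected count of 1, so unexpected talkers are flagged."""
    return [ip for ip, count in observed.items()
            if count > factor * baseline.get(ip, 1)]

# Illustrative baseline of expected packet counts per source IP
baseline = {"10.0.0.5": 100, "10.0.0.9": 40}
observed = Counter({"10.0.0.5": 120, "10.0.0.9": 500, "203.0.113.8": 50})
print(flag_anomalies(observed, baseline))  # ['10.0.0.9', '203.0.113.8']
```

Measuring against an explicit baseline like this replaces "what the analyst believes he or she is observing" with a reproducible comparison.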
Measurement and calculation strategies are areas where QA has established familiarity. Basic ways in which QA may assist include: using strong monitoring methods; verifying research software and guaranteeing the validity of the results; and supporting checks across the network and its processes to confirm that they are clean. QA can log what is inspected, what is observed, and what needs to be re-examined in the future, and with which tools. This also lays the foundation for analysis if a break-in happens (Singhal 2011, p. 179). One of the four main QAI strategic principles is to ‘manage by fact.’ If the evidence does not support the argument that there has really been an intrusion, the case is far harder to make to the police, the FBI, or the government. QA can also help document the method of analysis, not just the tools used: who performs it, when it is performed (how often), what is reported, and how it is published.
QA can also assume the position of process auditor, which maintains transparency. It can set up protocols to support a research lab where intrusions and infiltration methods may be tested; such procedures help minimize the probability that a test attack escapes into the production environment. It can establish a procedure for receiving permissions prior to testing or scanning for network assaults, and implement systems and protocols, streamlined when appropriate, for continuous scanning and mapping of known APTs and potential exposures. It is worth remembering that the individual conducting QA activities and protection activities can be the same person. Throughout this article, it is assumed that the methods of each discipline complement the other. Educating protection practitioners in QA practices and methodologies is likely a valuable way to start the process, and QA practitioners should learn the security basics to ensure they do what they can. Much of what goes into defense is not complex but requires discipline and caution; maintaining quality of any sort involves the same discipline (Singhal 2011, p. 180). The introduction of protocols that facilitate the efficiency and protection of technology applications builds this diligence into the overall job structure.
As the Internet continues to expand, as businesses provide greater electronic access to each other, as policymakers work to secure the privacy of Internet users, and as customers more specifically identify what they expect from web interaction, the efficiency and protection of the overall networks must adapt to these needs. As a closing illustration, this section shows how organizations are continuing to define safety standards for their partner sites as well as for themselves. Organizations should seek to handle the confidentiality of consumer data from end to end. Part of their ‘simple guidelines’ is the mandate that traders ‘regularly monitor protection processes.’ Having sound procedures and equipment to conduct these checks, and having overall quality controls, will put firms ahead of the pack when it comes to common standards, government oversight, peer criteria, and consumer preferences. QA and safety engineers can establish a strategic partnership to support the business plan. With protection and efficiency as part of the way an organization conducts business, core focus can be given to fulfilling the desires and demands of its clients profitably.
Overview of Requirements
An organization-wide network includes a technology-based approach for a community-wide data collection and management framework whose primary functions include: data accessibility; individual profile management, operational management council, linkages to relevant systems/external enforcement databases; user identities, data integrity, and system protection.
Functional Requirements
To satisfy the requirements articulated above, the organization’s extended network involves a community-wide data collection and management infrastructure that incorporates the following specific functionality: availability of data; organization profile details freely accessible via a searchable database; profile data available for direct system-to-system access through an HTTP API; the system able to log the date and time of the last API data pull; profile information for specific organizations downloadable in Excel or PDF format; and management of the entity profile.
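The system-to-system access requirement, including logging the date and time of the last API data pull, can be sketched as follows. The in-memory stores, organization IDs, and field names here are illustrative assumptions, not the actual system schema:

```python
import datetime
import json

# In-memory stand-ins for the profile database and the API access log;
# a real deployment would back these with a database.
profiles = {"org-001": {"name": "Example University", "duns": "123456789"}}
api_log: dict = {}

def api_get_profile(org_id: str, user: str) -> str:
    """Serve one organization profile over the API and record the date
    and time of the pull, per the functional requirement that the
    system log the last API data access."""
    api_log[(user, org_id)] = datetime.datetime.now(datetime.timezone.utc)
    return json.dumps(profiles[org_id])

body = api_get_profile("org-001", "subscriber-42")
# api_log now holds the timestamp of subscriber-42's last pull of org-001
```

Subscribers using the API could then be audited against their agreement to refresh local data regularly, simply by inspecting the log.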
The organization’s extended web-based network infrastructure will need an initial team of developers and testers. These functions will be carried out by the EC-SDWG on a voluntary basis. The project will include financial, project management, and preparation activities to be undertaken by the ECWG (Sharp and Lang 2018, p. 45). To meet the continuing demands of this method, the company may need a supervisory committee to monitor and maintain the system and group data and to ensure the system’s continuous efficiency and credibility (ECWG). Once installed, the framework is expected to operate on a continuous basis, with at least an annual evaluation mechanism set up to assess its efficacy, the requirements for enhancements or improvements, the possible need for retirement, or other circumstances. Entities may have access to data and documentation, although the dissemination of community-wide information must be approved by the Executive Committee of the Association.
Organizational Effects
In the expectation that the expanded web-based system will gradually eliminate the numerous data collection elements presently managed by or for the organization, we anticipate the ultimate long-term effect to be a streamlining of the data currently collected on the organization’s website. We expect potential time savings for organizational personnel and stakeholders, since there will be one combined and unified location to hold what is currently dispersed: the company member institution profile, the A-133 database, the FCOI network, and the entity extended Excel/PDF archive. Users can communicate with the system through the network in real time (Horton 2020, p. 45). Organization participants will be required to obtain and retain a safe and stable internet link sufficient to enable data entry by their employees. System problems in general, along with data entry, monitoring, and usage, will be supported by the organization-wide network working group. Organizational pilot entities must change their existing sub-recipient agency forms and internal procedures to meet the planned process of data collection and entry, including timeliness. User details, guidance, and FAQs will be established by the administrators and the EC-SDWG and stored on the organization’s website, and all pilot organizations will have access to the system’s guidance and training documents, including the FAQs. The organization’s extended network will be retained in the existing Excel spreadsheet/PDF registry mode until the web-based infrastructure has been created, validated, and completely implemented. Pilot agencies will be expected to assist in the transformation of strategies and programs.
Implementation impacts on the working group: the Organizational Extended Network System Implementation Working Group will devote resources and work together to build and manage the network system in the following ways: contact with and between group participants; frequent phone calls for progress analysis and clarification of open issues; and schedule and feedback updates for the initiative to keep participants up to date (Klepper 2020, p. 69). The framework will implement a variety of data QA procedures, including but not restricted to: input masks; drop-down lists of common answers; record data completeness criteria; simple data logic alerts (e.g., gender: male with pregnancy status: y); and manual analysis and confirmation of new draft person profiles by the appointed organizational administrator prior to the publication of profiles.
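The completeness criteria and data-logic alerts listed above can be sketched as a small validation routine. The field names and required-field set are illustrative assumptions; the gender/pregnancy check is the example given in the plan:

```python
# Illustrative required fields for a draft profile record
REQUIRED_FIELDS = {"org_name", "gender", "pregnancy_status"}

def validate_record(record: dict) -> list:
    """Return a list of QA warnings for a draft profile record:
    completeness criteria plus a simple data-logic alert."""
    warnings = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        warnings.append(f"missing fields: {sorted(missing)}")
    # Data-logic alert from the plan: gender 'male' with pregnancy 'y'
    if record.get("gender") == "male" and record.get("pregnancy_status") == "y":
        warnings.append("logic alert: gender 'male' with pregnancy status 'y'")
    return warnings

print(validate_record({"org_name": "Acme", "gender": "male",
                       "pregnancy_status": "y"}))
# ["logic alert: gender 'male' with pregnancy status 'y'"]
```

Records with a non-empty warning list would be held for the manual administrator review described above rather than published directly.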
Timing and Availability
The system is planned to be accessible online throughout the year, except for scheduled and pre-notified software configuration downtimes where necessary. Data will be immediately eligible for use, except for new accounts, which will wait in a queue for validation by the corporate administrator (Klepper 2020, p. 69). The EC-SDWG will ensure that system services are sufficient for timely response times and the program’s overall quality. The EC-SDWG can examine ISP/hosting-provider options and, once the initial implementation is complete, move the network to that provider for hosting. The system will be designed and tested on the organization’s operating systems and then migrated under an EC-SDWG arrangement. The ISP/hosting expense is estimated at about $1,200 a year and will be borne by the company.
Failure Contingencies
The system is not mission-critical. Temporary inaccessibility of a few days would not impose a huge strain on anyone. The hosting location will be selected to provide data backup features and protocols. Security experts can retain a copy of the organization network’s code, which is covered by regular backup procedures (Klepper 2020, p. 69). If the EC-SDWG considers it wise, a copy may be stored at the institution or another backup location. With the use of an ISP/hosting service, downtime is expected to be limited or non-existent.
Additional System Specifications
The planned organization’s extended network infrastructure will consist of a web-based, consolidated organization profile database and reporting facility used to facilitate ongoing sub-recipient agency tracking of organizations’ operations and responsibilities. Generally, all users can provide input directly to the system, and outputs (reports) will be produced directly from it; however, versatility in both input and output modes is needed to ensure growth capacity. Participating pilot agencies will provide input (i.e., agency-level data), and the EC-SDWG, as an association agent, will provide system management and help with report generation (Stellflue 2020, p. 67). The system is initially intended to be built mainly by workers at the organization, in close collaboration with the EC-SDWG. Where possible and agreed, security experts may also take part in producing materials.
1.7. Data Available
1.8. The Industry Standard Methodology for Designing and Development
SCRUM is the industry-standard methodology selected for designing and developing a cybersecurity plan for the security project. Scrum is a framework that works in fixed time boxes of one month or less, called Sprints (Klepper, 2020). When one Sprint ends, the next Sprint begins, until the product is complete. The Scrum Team works through Scrum Events to produce Scrum Artifacts. The Scrum Team comprises the Product Owner, the Scrum Master, and the Development Team. The Product Owner is responsible for the outcome of the product (Stellflue 2020, p. 67). The Scrum Master, on the other hand, is responsible for ensuring that the Scrum Team performs efficiently and for interacting with partners outside the unit; the Scrum Master has no management authority. The Development Team is a group of individuals with various technical skills collaborating on the product’s development.
To ensure that all facets of the work are visible to all Scrum Team members, Scrum Artifacts are built and updated at each point of the process. The three typical artifacts in Scrum are the Product Backlog, the Sprint Backlog, and the Burndown Chart (Pohl and Hof 2015, p. 89). The Product Backlog is an ordered collection of the specifications, roles, and functionality of the product; it is created at the earliest stage by the Product Owner and can be modified throughout the development period (Klepper 2020). The Sprint Backlog is the collection of items selected from the Product Backlog and may be viewed as the goal of the Sprint. As tasks in a Sprint are completed, progress is recorded in the Burndown Chart (Klepper 2020). The Scrum Team works through multiple events until all Sprint Backlog tasks are accomplished. A Sprint usually has four events: Sprint Planning, the Daily Scrum, the Sprint Review, and the Sprint Retrospective. Sprint Planning is where the Product Owner, Scrum Master, and Development Team build the Sprint Backlog and decide how they will get the work done. The Development Team then meets regularly, facilitated by the Scrum Master. In the Daily Scrum, team members report what they achieved the day before, their challenges, and what they plan to accomplish that day. The Sprint Review is conducted at the end of the Sprint to demonstrate the increments of working product. Before beginning a new Sprint, the Sprint Retrospective is a gathering for the Scrum Master and Development Team to inspect their own work and establish an improvement plan for the next Sprint.
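The relationship between the Product Backlog, the Sprint Backlog, and the burndown count can be sketched in a few lines. The class and field names below are illustrative assumptions, not taken from any Scrum tool:

```python
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    """A Product Backlog Item: a requirement, role, or piece of functionality."""
    title: str
    priority: int          # lower number = higher priority, set by the Product Owner
    done: bool = False

@dataclass
class Sprint:
    """A time-boxed Sprint holding the items pulled from the Product Backlog."""
    backlog: list = field(default_factory=list)

    def remaining(self) -> int:
        """Open items: the figure a Burndown Chart plots day by day."""
        return sum(1 for item in self.backlog if not item.done)

# Product Backlog, ordered by the Product Owner.
product_backlog = sorted(
    [BacklogItem("Login audit trail", 2),
     BacklogItem("Password hashing", 1),
     BacklogItem("Session timeout", 3)],
    key=lambda item: item.priority,
)

# Sprint Planning: the team pulls the top items into the Sprint Backlog.
sprint = Sprint(backlog=product_backlog[:2])
sprint.backlog[0].done = True      # one item finished after a Daily Scrum
assert sprint.remaining() == 1     # the number the Burndown Chart would show
```

The sketch only captures the bookkeeping: the Product Backlog stays ordered, the Sprint Backlog is a selection from it, and the burndown is simply the count of unfinished items over time.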
There are many methodologies for incorporating security into the Scrum framework. Veracode recommended two: the Security Sprint approach and the Every-Sprint approach. In the Every-Sprint approach, security user stories are handled within each Sprint; the biggest issue with this approach is the expensive need for a security specialist on the Scrum development team. In the Security Sprint approach, security user stories are evaluated and implemented in a separate Sprint; this strategy can slow the development phase. Another proposal is a secure variant of Scrum, dubbed S-Scrum, which attaches security Spikes to the Scrum Events; a simplified form of the S-Scrum framework has also been described. One of the lightest and easiest strategies is Secure Scrum, a variation of the Scrum methodology with a special focus on building secure software throughout development. Secure Scrum has four components: identification, implementation, verification, and definition of done. These four components are combined with six Scrum elements to improve the security of software development. At the center of Secure Scrum is the use of the S-Tag and the S-Mark.
The Identification component recognizes security concerns in stakeholders’ user stories. The security-relevant user stories are then ranked by their risk and marked in the Product Backlog. The marker is called an S-Mark and may be a sticker, a dot, or a colored background. Based on the user stories and meetings, a collection of action items is generated; the security concern itself is described in an S-Tag. An S-Tag can be attached to one or more Product Backlog Items, since several backlog items often share the same security concern. Because Product Backlog Items (PBIs) can change over time as the project evolves, S-Tags can be updated whenever the Product Backlog Items they refer to change or when Sprint Planning takes up the next item.
As a motivator, the Implementation component goes a long way toward ensuring the development team’s security awareness. Beyond the S-Tag itself, it is important to use proper language in user stories and impact statements. In some cases, user stories are split into separate tasks; each task derived from an S-Marked story receives an S-Mark of its own (Klepper 2020, p. 69). The Verification component ensures that a team member has reviewed the security testing of S-Marked tasks. A verification signature is part of the Definition of Done, and verification is managed within the Daily Scrum. In some cases, team members do not know enough about the security issues and fundamentals of the product they are working on and must seek external assistance. When a task cannot be verified in place, a new task for verifying it is created, and all relevant S-Tags are immediately applied to the new task. External resources can help the Scrum Team solve challenges, enhance its knowledge, and provide a more external view of the task. The Definition of Done component requires that the security measures implemented be double-checked and validated by either internal or external resources.
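A minimal sketch of how S-Marks, S-Tags, and the verification signature interact in the Definition of Done. The field names (`s_tags`, `verified_by`) are hypothetical illustrations, not identifiers from the Secure Scrum literature:

```python
from dataclasses import dataclass, field

@dataclass
class PBI:
    """A Product Backlog Item that may carry security concerns."""
    story: str
    s_tags: set = field(default_factory=set)  # S-Tags: named security concerns
    verified_by: str = ""                     # security sign-off, empty until verified

    @property
    def s_marked(self) -> bool:
        """An item carries an S-Mark as soon as any S-Tag refers to it."""
        return bool(self.s_tags)

    def is_done(self) -> bool:
        """Definition of Done: S-Marked items additionally need a verification signature."""
        return (not self.s_marked) or bool(self.verified_by)

login = PBI("As a user, I can log in", s_tags={"credential-theft"})
banner = PBI("As a user, I can change the page colour")

assert banner.is_done()      # no security concern, so no sign-off needed
assert not login.is_done()   # S-Marked but not yet verified
login.verified_by = "external security consultant"
assert login.is_done()       # sign-off recorded, Definition of Done satisfied
```

The point of the sketch is the asymmetry: unmarked items follow the normal Definition of Done, while S-Marked items are blocked until an internal or external reviewer signs off.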
Secure Scrum Evaluation
The Secure Scrum method means that Scrum adds another layer of security communication to a project, alongside the way quality products such as requirements and user stories are designed. The first advantage of the approach is that it can increase the security awareness of team members during the development process. Also, having IT security expertise within the development team can reduce overall spending by cutting down on the number of contractors and consultants. The method also provides a way for external security resources to be deployed when necessary (Klepper 2020, p. 69). Secure Scrum arguably solves two issues related to integrating security: first, lack of security knowledge and lack of security awareness; second, prioritization of tasks. However, in the process of identifying security issues there is often no Principal Security Engineer, and security problems may go unaddressed until the verification phase. It is important to establish who is responsible for, and continuously present in, the security tasks; otherwise the security work may be done wrong.
The Dynamic Systems Development Method (DSDM)
There are several DSDM agile development frameworks on the market that deliver the right solution at the right time for the most effective result (Arthur and Dabney 2017). DSDM is based on Rapid Application Development (RAD), and its implementation works toward minimizing the time needed to develop the system. The project can be viewed from several perspectives at once: the management view, the business view, the technical view, and the progress view. A person may hold one or more roles. Based on the latest iteration of the DSDM Agile Framework (Klepper 2020), project roles are structured by category according to the following pattern: Solution Development Team Roles, Project-level Roles, and the respective Supporting Roles.
Project managers are responsible for understanding the direction of the project. They are also charged with strategic planning, developing processes, and creating manual documents to guide certain measurements such as capacity tests (Abdelkebir, Maleh and Belaissaoui 2017, p. 5). The solution development team draws on people from many fields, including technological development, business administration, accounting, marketing, biology, chemistry, civil engineering, and healthcare; they are responsible for developing the system based on the procedures established by the project-level roles. People in supporting roles need not be involved in the project day to day: they develop concepts and content, promote workshops, lead discussions, and facilitate the Solution Development Roles when necessary.
• Orange: roles representing the business view
• Green: roles representing the technical/solution view
• Blue: roles representing the management and leadership view
• Grey: roles representing the process view
In the next two stages (the Functional Model and the Design and Build phases), there are eleven activities in the process: Identify Prototype, Agree Plan, Create Prototype, Ship Prototype, Test Prototype, Review Prototype, Reuse Prototype, Prototyping, Refinement, Request for Proposal, and Final. The Functional Model Iteration emphasizes developing the standards of analysis for consumer products and product components, and it results in the production of a functional prototype. The Design and Build Iteration ensures that the system has been implemented to a sufficiently high standard and meets the project’s requirements; the system built in iteration one is the test system in iteration two (Abdelkebir, Maleh, and Belaissaoui 2017, p. 4). The Implementation phase is where the early development models are adapted into an operating system. This process includes several activities: training users, providing a user guideline, receiving user approval, and reviewing the business. The products of the Implementation phase are the delivered system and the increment review document. Because project requirements differ greatly depending on project features, the Functional Model Iteration, Design and Build Iteration, and Implementation phases can be merged, rearranged, overlapped, or joined.
The proposed methodology is a way to build a trusted system that does not rely on the integrity of source code; it maintains security by creating an external, trusted application. As with other agile development methods, the Dynamic System Development Method does not pay enough attention to how secure the resulting software is. Research on integrating security into DSDM centers on the software security issues that exist in agile practices generally, and it emphasizes how little security is built into DSDM itself. Supporting the meta-analysis findings, discussion of the topic was found in just one forum, which shows the low level of attention security receives in dynamic systems development practice.
One suggested technique is to implement the SQUARE framework (Security Quality Requirements Engineering) within the Dynamic Systems Development Method Framework (DSDMF) (Mead, Viswanathan, & Zhan, 2008). SQUARE is a method for eliciting security requirements in IT development that incorporates security considerations into the development process. SQUARE has nine steps, which Arthur and Dabney (2017) combined into the DSDM Business Study and Functional Model Iteration phases.
DSDM Phases and SQUARE Steps
Security problems are initially addressed in the general iteration of the functional model: identifying a working prototype and identifying security issues are carried out concurrently. Two stages are then added to the production process: a Secure Functional Model Iteration, performed after the Functional Model Iteration, and a Secure Design and Build, performed after the Design and Build phases. Essentially, these are Functional Model Iteration and Design and Build Iteration stages that concentrate on security activities (Stellflue 2020, p. 67). The two new steps share the same framework of four sub-phases: identify the prototype, agree the plan, create the prototype, and review the prototype. Because operating a secure system requires technical expertise, user training courses are strongly recommended throughout the deployment process; users can check the security of the system and help developers identify security problems while using it. Evaluating the incorporation of SQUARE steps in the DSDM process and the Secure Dynamic System Development Method: SQUARE is a process model for eliciting, categorizing, and prioritizing security requirements for IT systems and applications. By integrating SQUARE into DSDM, the security requirements are fully incorporated into the Business Study and then updated in the Functional Model Iteration. With this approach, the security requirements are thoroughly established. Unfortunately, security requirements alone cannot guarantee that the security tasks are carried out later.
In comparison with incorporating SQUARE steps into the DSDM process, the Secure Dynamic System Development Method applies security-relevant elements to the later phases of DSDM. It does not discuss in detail the analysis of security problems; instead, with the two new Secure Functional Model and Secure Design stages, security measures are designed and evaluated throughout the production period. The method’s novel aspects are security preparation and security analysis during the deployment process (Moyon et al., 2018, p. 32). Because security checks cannot be carried out within a short iteration, user support during the development phase is a way to detect errors and increase product safety. Incorporating SQUARE steps into DSDM can be combined with this method to build a security integration structure for DSDM: first, security specifications are established during the Business Study and updated in the Functional Model Iteration; then the security tasks are designed, checked, and evaluated through the Secure Dynamic System Development Method. Extreme Programming, by contrast, is a lightweight approach that designs applications in the face of unclear or quickly evolving specifications and focuses on solving software development shortcomings (Arthur and Dabney 2017). There are various positions on an Extreme Programming team: programmer, customer, tester, tracker, coach, consultant, and big boss (Klepper 2020). Of these, the programmer, the customer, the coach, and the tracker must be filled on every project. An individual may hold more than one position, but they need to know which hat they are wearing.
The programmer is the individual who deals directly with the software; programmers are the only technicians on the team. In Extreme Programming, the programmer interacts with other individuals, including technicians and business people. The customer is the one who decides what to program. Unlike customers in other frameworks, Extreme Programming customers need to develop skills in writing stories, functional testing, and decision-making. The tracker is the team’s conscience: they track the progress of work and offer input. The coach is the one accountable for the method; the coach can direct the team to work together while keeping the team free to work individually. Twelve key practices occur in Extreme Programming (Arthur and Dabney 2017). The Planning Game: the scope of the next release is determined by combining business priorities (project scope, the importance of work products, the composition of releases and release dates) and technical estimates (time estimation, technical consequences, process, and detailed scheduling). Small releases: each release should be as small as possible, and its duration as short as possible. Metaphor: the metaphor is a guideline for system growth that conveys how the whole system works. Simple design: the system design should be as simple as possible; it should run all the tests, have no duplicated logic, state all essential intentions, and have as few classes and methods as possible. Testing: unit tests by programmers and functional tests by the customer are needed for each software feature.
How SCRUM Promotes Automation in Cybersecurity, Improves and Modernizes Security Assurance
Secure Scrum helps non-security professionals to define security problems, implement security features, and test applications. A Secure Scrum field test shows that the level of security of software produced using Secure Scrum is higher than that of software developed using standard Scrum. Scrum organizes engineers in a small development team with a certain latitude in how they create applications; it is assumed that any developer can handle any of the tasks at hand. Software is developed in so-called sprints. A sprint is a fixed period, typically between two and four weeks (Abdelkebir, Maleh, and Belaissaoui 2017, p. 7). During a sprint, the team delivers an increment to the program’s existing iteration, usually comprising a given amount of new functionality or features captured as user stories. User stories are used in Scrum to record the requirements of the software project, and all user stories are placed in the Product Backlog.
During sprint development, user stories from the Product Backlog are split into tasks, which are stored in the Sprint Backlog. The Product Owner is the sole point of contact between the customer and the developer team and therefore prioritizes the functions to be implemented. Regular Scrum does not have any security-specific elements (Pohl and Hof 2015, p. 45). One of the main drivers of secure software in Secure Scrum is recognizing the security-relevant components of the software project; that security relevance is then kept visible to all members of the team at all times. This strategy is known to improve the level of security, because developers rely on items they have assessed themselves and fully understand, and their prioritization of the criteria does not differ from the prioritization of others.
Secure Scrum seeks to maintain an appropriate degree of security for a given software project. The word “appropriate” has been chosen to prevent costly IT security over-engineering in software projects; the concept of an appropriate degree of security is key to creating resource-efficient applications (time and money being important resources during software development). Secure Scrum relies on the following principle for the definition of an appropriate degree of security: software must be secure enough that it is no longer profitable for an attacker to locate and exploit a flaw (Moyon et al., 2018, p. 32). That is, an appropriate degree of security is achieved once the expense of exploiting a flaw is greater than the potential value of the exploit. Secure Scrum thus provides a way not only to identify the security-relevant sections of the project but also to judge the attractiveness of attack vectors in terms of ease of use. Linked to the detection of security concerns is the need to build functionality to avoid future security threats. In Scrum, each member of the team is accountable for the completeness of the solution (Definition of Done), and a large range of methodologies may be required to validate completeness. A team member may use any form of verification (as with normal tests, the Scrum methodology does not tell the developer how to test). Secure Scrum therefore lets developers choose appropriate vulnerability-checking tools to secure the relevant sections of a software project.
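The “appropriate security” rule stated above reduces to a simple cost comparison. The figures below are invented purely for illustration:

```python
def adequately_secure(exploit_cost: float, exploit_value: float) -> bool:
    """Secure Scrum's working rule: software is secure enough once finding and
    exploiting a flaw costs the attacker more than the exploit would yield."""
    return exploit_cost > exploit_value

# Illustrative figures only: hardening raises the attacker's cost above the payoff.
assert not adequately_secure(exploit_cost=5_000, exploit_value=20_000)   # still profitable to attack
assert adequately_secure(exploit_cost=30_000, exploit_value=20_000)      # no longer profitable
```

In practice neither number is known precisely; the comparison is a reasoning device for deciding how much security engineering a backlog item deserves, not a computable threshold.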
One last problem overcome by Secure Scrum is recognizing when outside help is required. Secure Scrum assumes that the team itself can handle the overwhelming majority of requirements, retaining all of Scrum’s advantages. However, for certain security-related problems, the use of external services such as security experts on the project might be necessary or more cost-effective (Moyon et al., 2018, p. 32). Notably, Secure Scrum provides a way to incorporate these external resources within the project without breaking Scrum’s characteristics and with very little administrative overhead.
1.9. Deliverables Associated with Design and Development of Technology Solution
The project plan is to provide a robust framework for identifying, analyzing, and mitigating cyber-attacks in large enterprises by leveraging cloud computing technology and conventional security governance frameworks. Through this focus, the framework virtually contains zero-day attacks, enabling firms to secure enterprise information infrastructure. The project’s scope is to conduct research on the effectiveness of cloud-based solutions against these types of attacks, identify these solutions’ performance, and provide improvement considerations to optimize efficiency and resilience. The project’s goals are to develop a robust security framework for enterprise information infrastructure and integrate robust security management strategies into organizational architecture. The project’s objectives are to build the APT management platform, test its efficiency and resilience, and effectively incorporate it into the corporate environment.
Besides known advances in security system technologies and encryption technology, cloud-based solutions that virtualize cybersecurity will be necessary to gain an advantage against cybercriminals. As more assets become connected to the web, the ability to rapidly restore information and create virtual honeypots to analyze the adversary’s zero-day exploits will be critical. Virtualization provides options such as a kill switch to completely shut down a server and restore it instantaneously elsewhere to continue operations. This framework will integrate encryption schemes and security system technologies into traditional security architecture to offer a robust APT analysis platform (Kumar & Goyal, 2019). Through this approach, the proposed solution will effectively manage these APTs to thwart state-sponsored cyber-attacks.
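A kill switch of the kind described could, under heavy simplification, look like the following sketch. The `kill_switch` helper and the dictionary-based server records are assumptions for illustration only; real platforms expose these operations through hypervisor or cloud provider APIs:

```python
def kill_switch(server: dict, standby_site: str) -> dict:
    """On compromise, shut the virtual server down and bring a clean
    copy up at another site so operations continue elsewhere."""
    server["state"] = "stopped"                      # isolate the compromised instance
    replacement = {
        "name": server["name"],                      # same logical service...
        "site": standby_site,                        # ...restored at the standby site
        "state": "running",
    }
    return replacement

web = {"name": "web-01", "site": "primary", "state": "running"}
clean = kill_switch(web, standby_site="warm-site")

assert web["state"] == "stopped"       # compromised copy is shut down
assert clean["state"] == "running"     # clean copy continues operations
```

The compromised instance can then be retained offline as a honeypot-style artifact for analyzing the adversary’s tooling, which is the analysis angle the framework aims at.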
1.10. Implementation Strategy and Expected Outcome
The implementation plan comprises four main steps that ensure effective management of APTs in the corporate environment. During the first stage, the behavior of known Chinese APTs will be investigated to provide insights into how these APTs perform; this process will involve examining the behavior of these attacks in a virtualized environment. The second phase is building the attack profile of these APTs to develop the mitigation framework. At this stage, the APTs’ behavior will be analyzed and documented, providing a robust management framework; risk management practices will also be developed to ensure effective management of the attacks, and the behavior of the threat containment measures will be investigated to enhance understanding of the security frameworks. The third phase is deploying the cloud-based security architecture to manage APTs (Sharp and Lang 2018, p. 50). At this stage, the intelligence obtained from the APTs and the performance of the conventional security solutions will be integrated into the cloud infrastructure, providing automated threat modeling and management. The fourth phase is testing the performance of these security solutions in the cloud infrastructure and the resilience of the enterprise IT architecture against these attacks. During this stage, the solution’s performance will be extensively examined to ascertain its generalization and applicability in large corporations, and any performance issues identified will be resolved and documented.
There are four anticipated project outcomes. First, the developed solution is expected to identify and mitigate APT in real-time. This phenomenon implies that the architecture offers real-time monitoring of the enterprise infrastructure to identify malware and breaches and contain their impacts on the corporate systems. Secondly, it is expected that the solution will document and notify the system administrators regarding these breaches in real-time (Javidi and Sheybani 2018, p. 69). Logging and notification of APTs optimize system auditing, enabling administrators to enforce compliance across the enterprise environment. Through this functionality, respective agencies will optimize preparedness and responsiveness to security incidents. Thirdly, it is expected that the system will effectively filter malware, preventing it from executing in the live enterprise environment. The developed solution will identify and redirect malware and its relevant processes to the warm site for observation and analysis, safeguarding the live IT infrastructure.
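The first three expected outcomes (real-time identification, logging and notification, and redirection of malware to a warm site for observation) can be illustrated with a minimal event handler. The indicator names and the list standing in for the warm site are hypothetical:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("apt-monitor")

SIGNATURES = {"beacon-c2", "dll-sideload"}   # illustrative APT indicator names

def handle_event(event: dict, warm_site: list) -> str:
    """Classify an event; log and notify on a hit, diverting it for analysis."""
    if event["indicator"] in SIGNATURES:
        # Logging doubles as the audit trail and the administrator notification hook.
        log.warning("APT indicator %s on host %s", event["indicator"], event["host"])
        warm_site.append(event)              # redirected for observation, not executed live
        return "quarantined"
    return "allowed"

warm_site = []
assert handle_event({"host": "srv1", "indicator": "beacon-c2"}, warm_site) == "quarantined"
assert handle_event({"host": "srv2", "indicator": "dns-lookup"}, warm_site) == "allowed"
assert len(warm_site) == 1
```

A production system would of course use behavioral analytics rather than a static signature set, but the control flow (detect, log/notify, divert) is the same shape as the outcomes described above.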
Lastly, it is expected that the developed solution will adapt to organizational needs and requirements, providing robust scalability and customization in security governance. Through these features, firms and agencies will enhance the management of state-sponsored cyber-attacks. There are three major deliverables for the project. First, there will be a cloud infrastructure that provides a security solution and management framework (Sharp and Lang 2018, p. 48). The configurations and copies of the relevant files will be provided as the deliverables in this requirement. Secondly, the framework’s documentation, including the modifications and rationale, will be provided in the project. Providing these resources enhances understanding of the platform and its integration into the corporate environment. Lastly, user manuals and technical reference documents will be provided to enhance the solution’s implementation and management. Thus, these deliverables improve access and control of the solution.
1.11. QA Plan for Solution
Effective businesses have discovered how to sell products that customers like, at a price they can afford, in a way that allows a profit from the sale. Highly competitive firms provide premium goods and services in this trade and keep the quality high, so that the consumer returns the next time he or she wishes to buy. Quality has been defined as “the totality of characteristics and capabilities of a product or service that bear on its ability to satisfy stated or implied needs.” It should not be confused with degree of excellence or fitness for use, which satisfy only part of the definition. By this definition, security is a quality attribute. The American Society for Quality has offered the following statements about quality:
• Quality is not a program; it is an approach to business.
• Quality is a collection of powerful tools and concepts that are proven to work.
• Quality tools and techniques are applicable in every aspect of the business.
Quality increases customer loyalty, reduces cycle times and costs, and eliminates errors and rework. Quality and financial results are the natural outcomes of good quality management. Security is described by Harris (2019, p. 45) as: 1) freedom from risk or danger; safety; 2) freedom from doubt, anxiety, or fear; confidence; 3) something that gives or assures safety, such as: (a) a group or private guards; (b) government measures to prevent espionage, intrusion, or attack; (c) measures taken by a company or homeowner. Taken together, the two concepts illustrate that 1) quality is the responsibility of the whole organization, and 2) security is part of the overall quality of the system, inherent in the preferences of the consumer. Quality can be part of how business is done: it covers the standard of applications, but it also involves everything the company does and the security of its assets, both tangible and intangible. As a quality attribute, security must be addressed across the organization, in the formulation of strategy and policy and in the execution and control of both.
In general, there is a strong correlation between business success and disciplined quality management fundamentals, and strengthening security will improve this success. Applying the methods, principles, and strategies that have been shown to work in QA to security could result in improved consumer loyalty, decreased rework, and increased performance. One practitioner, in a SANS certification paper, discussed this from the perspective of software quality: she claims that improvements in the quality of the software will, in turn, minimize the risk of introducing vulnerabilities into the systems. She addresses how individual developers should fix security concerns in software and how teams should work together to address them through code reviews, pair programming, and independent test teams.
Yet the role that quality plays in an enterprise is broader than just the production of applications, and security is not restricted to application software. Both quality and security must be pervasive in the company to reach a true depth of effect, and both can be established from the launch of a program. Much as businesses once tried to “test quality in,” companies treat security as something designed separately and bolted on afterwards, and it is therefore typically the messiest, ugliest, most user-unfriendly aspect of our processes. The only complete solution, which we cannot afford, would be to rebuild all of our IT software, systems, and technology with security planned and installed at the core (Sharp and Lang 2018, p. 45). Security can involve tactics, regulations, corporate structure, preparation, and more. Using Continuous Improvement as a comparison point, we can see several examples of how quality tools and strategies can help security practitioners enhance protection through people, processes, and technology.
People
Organizational understanding of security can be a crucial success factor in building a company’s defenses. People subvert protection systems because of human behavior, not because they are trying to be malicious. A consumer opens an email that states YOU HAVE BEEN ATTACKED and immediately wants to read it. They run an executable that lets them see dancing gophers because, of course, they want to see it. They see somebody at the entrance, hands full of pizza and beer, who cannot get in; wanting to help, they unlock the door and let them through. This is particularly true when someone in authority (or pretending to be in authority), or someone unhappy with them, asks them to do something that does not seem quite right. A help desk worker might be persuaded to give away a secret over the phone if they assume it is the big boss at the other end.
These are instances of individuals acting on natural curiosity, common decency, and a sense of self-preservation. A clear knowledge of security, coupled with a sound strategy, would give them the resources to cope effectively with any of these scenarios. Security experts can provide the content needed to train consumers. Quality practitioners can relate the material to the interests of the company as a whole and define the best strategies for getting the message out. They can also help assess the efficacy of the training, enabling technical security experts to focus their resources where they matter most. People must work together to get security work completed (Harris 2019, p. 45). Together with other departments within an organization, such as human resources, QA practitioners are trained in identifying appropriate structures within which work may be performed, including reporting roles, obligations, and authority. For example, QA may assist in identifying key security organizational frameworks that bring all aspects of security under the responsibility of a single senior executive with numerous security divisions under him or her. For a particular entity and specific cultural requirements, there may be a division of responsibilities with security checks and balances. Using guidance from the security personnel and the philosophy and strategic vision of the organization, a framework that suits each community’s objectives may be put in place.
Process
QA and quality control involve the systematic preparation and execution of quality strategies across the enterprise, which may in turn shape protective policies for security personnel. QA can make the case that defense is an important part of the corporate plan at the enterprise's highest level, and a strong protection framework should be published alongside the corporate strategy. Examining and articulating compliance policy at the corporate level forces executives to confront the daunting challenge of defining which risks are acceptable and which are not (Harris 2019, p. 45). This involves distinguishing decisions that can only be approved by the enterprise from those that a single business unit or agency may accept on its behalf. Understanding that the company cannot have both 100 percent protection and 100 percent functionality, cost versus reward must be weighed in decisions about specific communications. Defining acceptable risk also includes a discussion of risk-reduction practices and alternatives. Security experts should be prepared to negotiate with company administrators, at an appropriate level, about the risks that arise when networks are linked to 'unknown' networks such as the Internet. Both factors help determine what the danger is and how it can be mitigated, and the way the probability can be minimized informs which procedures should be in place.
Security is a process, not a product. A process can be described as a series of related or interacting activities that turn inputs into outputs. Processes describe what must be achieved, by whom, and when (Javidi and Sheybani 2018, p. 69). Individual practices, likely supported by technology, together build the walls that protect an enterprise. Well-documented protection policies, processes, and procedures enable individuals to take the appropriate steps without fear of reprisal, and even without a well-documented protocol, systems can be structured to enhance protection.
QA experts can often help document the protocols needed for the design, execution, promotion, and tracking of protection measures. Working with security professionals, they help track and identify the activities required of device employees, customers, and business users to support these processes and procedures day to day (Harris 2019, p. 45). Like legislation, procedures are prone to change as systems change, as organizations reorganize, or when a new framework is adopted; policy, however, functions as a meta-document and offers lasting guidance. Policy should stay broad and high-level, whereas protocols address who performs what, and where. Well-documented and closely followed protocols can define liability and provide an audit trail at the point of a possible violation.
Since protection technologies and vulnerabilities evolve rapidly, protocols need to be designed with enough flexibility for work to be completed quickly in response to threats against the company. Notification of emergency changes, for example, cannot be set so rigidly that sufficient preventive or responsive action is blocked because the one right person could not be reached. In situations like this, a mixture of protocols and policies can empower the individual with the experience to do the correct thing without fear (Moyon et al., 2018, p. 31). It is worth noting that executives are expected to make choices regularly with insufficient knowledge and large potential costs; security experts should likewise be trusted to make the correct decision based on business policies.
Technology
The technical emphasis belongs specifically to the security specialist in the joint attempt to improve security in an organization. Developing procedures and strategies on a consistent technological security base helps accomplish the aim of enhanced security. There are also ways for QA to support the reliable, repeatable use of technology by individuals. The following are examples of best practices in technological protection to which QA techniques can contribute. To protect the security of the network and the applications and information on it, steps must be taken both to prevent attacks from reaching the network and to identify attacks as they arise.
Network and security professionals use various methods to track, record, and evaluate data streaming through the network. One approach is to compare what they see with the usual behavior of each IP address. In certain instances, network and protection engineers build software or scripts to interpret the results (Ebert and Paasivaara 2017, p. 98). Applications such as tcpdump and nmap can provide substantial detail about what is occurring on the infrastructure and on the networks. Nevertheless, the use of these tools is not necessarily consistent from one engineer to the next, and understanding of the tools often differs from person to person. Custom instruments may fail to generate the intended findings because of basic logic errors, glitches, or misconfiguration. In many situations, the results are interpreted based on what the operator believes they are observing, rather than being measured against a baseline.
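To make the baseline idea concrete, here is a minimal Python sketch that compares per-destination connection counts against a recorded baseline. It assumes connection summaries have already been extracted from a capture (e.g., parsed from tcpdump output); the record format, addresses, and tolerance threshold are all illustrative, not part of any particular tool.

```python
from collections import Counter

def flag_anomalies(connections, baseline, tolerance=3.0):
    """Flag destinations whose observed connection volume exceeds the
    recorded baseline by more than `tolerance` times, or that never
    appeared in the baseline at all."""
    observed = Counter(dst for _, dst in connections)
    anomalies = []
    for dst, count in observed.items():
        expected = baseline.get(dst, 0)
        if expected == 0 or count > expected * tolerance:
            anomalies.append(dst)
    return sorted(anomalies)

# Illustrative data: (source, destination) pairs from a capture.
baseline = {"10.0.0.5": 100, "10.0.0.9": 40}
capture = [("10.0.0.2", "10.0.0.5")] * 90 + [("10.0.0.2", "203.0.113.7")] * 5
print(flag_anomalies(capture, baseline))  # ['203.0.113.7']
```

A previously unseen destination is flagged even at low volume, which matches how APT command-and-control traffic tends to hide in otherwise normal-looking flows.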
Measurement and calculation strategies are among the areas in which QA has expertise. Basic ways QA can help include applying strong monitoring methods, verifying analysis software, guaranteeing the validity of results, and supporting checks across the network and its processes to confirm they are clean. QA will log what is inspected, what is observed, and what needs to be re-examined in the future, and with which tools. This also lays the foundation for analysis if a break-in happens. One of the four main QAI strategies for the strategic phase is to 'manage by fact' (Sharp and Lang 2018, p. 49). If the evidence does not support the claim that there has been an intrusion, the case is far harder to make to the police, the FBI, or the government. QA can help describe the method of analysis rather than merely the tools used: who performs it, when and how often it is performed, what is reported, and how it is published.
QA can also assume the role of process auditor, which maintains transparency. It can set up protocols to support a research lab where intrusion and infiltration methods may be tested; these procedures help minimize the probability that an attack escapes into the production environment. It can establish a procedure for obtaining permissions before testing or simulating network assaults, and implement systems and protocols, streamlined where appropriate, for continuous scanning and mapping of known APTs and potential exposures. It is worth remembering that the individual conducting QA activities and the one conducting protection activities may be the same person. Throughout this article, the methods of each discipline are treated as complementary to the other. Educating protection practitioners in QA practices and methodologies is likely a valuable way to start the process, and QA practitioners should learn the security basics to ensure they do what they can. Much of what goes into defense is not complex but requires discipline and caution; maintaining consistency of any sort involves the same discipline. Introducing protocols that support the efficiency and protection of technology applications builds this diligence into the overall structure of the job.
As internet-based vulnerabilities continue to expand, as businesses provide greater electronic access to one another, as policymakers work to secure Internet users' privacy, and as customers more specifically identify what they expect from web interaction, the efficiency and protection of networks need to adapt to these demands. As a closing illustration, this section shows how organizations continue to specify safety standards for their partner sites as well. Organizations should seek to handle the confidentiality of consumer data from end to end. Part of their 'simple guidelines' is the mandate that traders 'regularly monitor protection processes.' Having sound procedures and equipment to conduct these checks, along with overall quality controls, puts firms ahead of the pack with respect to common standards, government oversight, peer criteria, and consumer preferences. QA and safety engineers can establish a strategic partnership in support of the business plan. With protection and efficiency built into the way an organization conducts business, core focus can be given to fulfilling clients' desires and demands profitably.
1.12. Assessment of the Risks of the Implementation
The assessment of risks will follow several steps to achieve the project objectives.
Identify and prioritize assets: Assets include servers, customer contact records, confidential partner papers, company secrets, and so on. Note that what you think is important as a technician may not be what is most valuable to the company, so you should collaborate with business users and administrators to build a list of all valued assets. Collect the following details for each asset, as applicable: devices, hardware, files, interfaces, users, support staff, task or purpose, criticality, functional specifications, IT security policy, IT security design, network topology, information storage safety, information flow, technical security controls, physical security climate, and environmental security. Since most organizations have a small risk-management budget, you will likely have to narrow the scope of the assessment to mission-critical assets (Moyon et al., 2018, p. 34). You therefore need to establish a norm for evaluating the value of each asset. Popular criteria include the asset's monetary worth, its legal status, and its significance to the company. Once the criteria have been accepted by management and formally integrated into the security risk appraisal strategy, use them to identify each asset as essential, significant, or minor.
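The three criteria named above can be sketched as a small classification function. This is a minimal illustration, not a prescribed scheme: the monetary threshold and the rule that legal or mission-critical status dominates are assumptions for the example.

```python
def classify_asset(monetary_value, legally_required, mission_critical):
    """Rate an asset as 'essential', 'significant', or 'minor' using the
    asset's monetary worth, legal status, and mission significance.
    Thresholds are illustrative only."""
    if legally_required or mission_critical:
        return "essential"
    if monetary_value >= 50_000:
        return "significant"
    return "minor"

# Hypothetical inventory entries.
assets = {
    "customer-db": classify_asset(200_000, True, True),
    "build-server": classify_asset(80_000, False, False),
    "test-laptop": classify_asset(1_500, False, False),
}
print(assets)
```

In practice the classification criteria would come out of the management-approved appraisal strategy rather than being hard-coded.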
Identify the Hazard
A threat is anything that might harm the company. Though hackers and ransomware are likely to come to mind first, there are other kinds of threats as well. Natural disasters: floods, storms, earthquakes, fire, and other natural disasters can destroy records, servers, and appliances. Consider the likelihood of various kinds of natural hazards when choosing where to host the servers; for example, your region might have a high risk of flooding but a low risk of tornadoes. Hardware failure: the risk of hardware malfunction depends on the quality and age of the server or other equipment (Ebert and Paasivaara 2017, p. 100). The risk of failure is low for relatively modern, high-quality appliances, but if the machinery is old or comes from a no-name vendor, the risk is far greater. Human error: this hazard should be on your radar no matter what business you are in; people can accidentally delete important data, click on a malicious link in an email, or spill coffee on a piece of equipment that hosts critical systems. Malicious conduct: there are three forms of malicious behavior. Intrusion is when someone harms your company by removing records, running a distributed denial-of-service (DDoS) attack against your website, physically stealing a device or server, and so on. Interception is the theft of your information. Impersonation is the misuse of someone else's credentials, often obtained through social engineering attacks, brute-force attacks, or purchase on the dark web.
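The threat categories above can be captured as a small catalogue keyed by risk factors. The factors chosen (flood-prone location, equipment age) and the always-in-scope categories are illustrative assumptions; a real assessment would draw on the BIA, not two booleans.

```python
# Illustrative threat catalogue, keyed by the categories discussed above.
THREATS = {
    "natural": ["flood", "storm", "earthquake", "fire"],
    "hardware": ["disk failure", "power-supply failure"],
    "human-error": ["accidental deletion", "phishing click", "coffee spill"],
    "malicious": ["intrusion", "interception", "impersonation"],
}

def threats_for(flood_prone_location, hardware_age_years):
    """Return the threat categories worth modelling for an asset,
    given two illustrative risk factors."""
    applicable = ["human-error", "malicious"]  # always in scope
    if flood_prone_location:
        applicable.append("natural")
    if hardware_age_years > 5:
        applicable.append("hardware")
    return applicable

print(threats_for(flood_prone_location=True, hardware_age_years=7))
```

Human error and malicious conduct apply to every asset, which is why they are unconditional here.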
Analyze the Controls
Analyze the safeguards that are currently in operation or in preparation to reduce or remove the probability of a threat exploiting a weakness. Technical safeguards include encryption, intrusion prevention systems, and identification and authentication methods. Non-technical safeguards cover security procedures, management activities, and physical and environmental mechanisms. All technical and non-technical controls can further be categorized as preventive or detective (Sharp and Lang 2018, p. 45). As the name suggests, preventive controls are intended to anticipate and deter attacks; examples include encryption and authentication technologies. Detective controls are used to discover attacks that have occurred or are in progress; these include audit trails and intrusion detection systems.
IT Risk Mitigation with Data Classification as well as Access Management
Analyze the effect an incident would have if the asset were destroyed or damaged, considering the following factors:
• The asset’s mission and the procedures that rely on it.
• The importance of the asset to the company.
• The vulnerability of the asset.
To obtain this knowledge, begin with a business impact analysis (BIA) or project impact analysis survey. This document uses quantitative or qualitative means to assess the effect of a disruption on the company's information assets, such as loss of confidentiality, integrity, and availability. The effect on the system can be measured qualitatively as high, medium, or low.
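The qualitative ratings above are typically combined into an overall risk level via a likelihood-by-impact matrix. The sketch below uses a common 3x3 scheme; the numeric weights and cut-offs are illustrative, not a standard.

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_level(likelihood, impact):
    """Combine qualitative likelihood and impact ratings into an overall
    risk rating using an illustrative 3x3 matrix."""
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

print(risk_level("high", "medium"))  # high
print(risk_level("low", "medium"))   # low
```

The exact cut-offs matter less than applying them consistently across assets, so that ratings are comparable when budgets are allocated.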
Recommend Controls
Determine the steps required to mitigate the risk, using the risk level as a basis. Here are several basic recommendations for each degree of risk: High: a corrective action plan should be developed as quickly as possible. Medium: a corrective action plan should be developed within a realistic timeline. Low: the team must decide whether to accept the risk or take corrective measures. When you evaluate controls to mitigate each risk, be sure to consider organizational strategy, cost-benefit analysis, operational effects, facilities, relevant legislation, the general efficacy of the recommended controls, and protection and reliability.
Record the Outcome
The final step of the risk assessment process is the development of a risk treatment plan that supports managers in making rational decisions about expenditures, plans, procedures, and so on. The report should describe the relevant vulnerabilities, the assets at risk, the effects on your IT infrastructure, the likelihood of each incident, and the recommended controls for each threat.
1.13. Technology Environment Tools, Related Costs, and Human Resources
As businesses seek to defend their operating networks, data, and personnel from cyber assaults, they have invested heavily in network protection software designed to secure the network's infrastructure against malware, worms, DDoS attacks, and other risks. Fewer organizations have invested in application security testing, which is meant to eliminate the bugs and weaknesses in software that could open the door to other forms of assault. But as software, and web applications in particular, have become the number-one attack vector for malicious hackers, more companies now count application security testing solutions among their most critical network security resources (Ebert and Paasivaara 2017, p. 100). While vulnerability assessment, transport-layer defense, network access management, and vulnerability scanning remain key network security tools, application security testing can help identify and remediate current and latent vulnerabilities that could lead to serious breaches, network compromise, and loss of computer security.
On-Demand Testing and Network Protection Software.
Cloud-based application protection testing tools help companies secure the applications that power their enterprises. Robust testing services are available on demand on a single platform, allowing developer teams and IT managers to evaluate code more efficiently, conveniently, and cost-effectively. Veracode provides application and network protection resources that companies use to validate apps at every stage of the SDLC, from static analysis services that identify bugs when code is compiled to dynamic analysis and site-scanning services. With Veracode's software and network management techniques, companies can gain better protection without losing efficiency or time-to-market. Developers can submit code for review via the Veracode Application Protection Portal, with results for most programs returned within four hours. The reports are highly accurate, reducing false positives to save effort and time. Step-by-step remediation guidance allows developers to rapidly identify and repair bugs, and the solutions integrate with IDEs, so developers never have to stop coding to open a new testing framework.
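Integrating such scans into the SDLC usually means gating the build on the findings. The sketch below shows the gating idea only; the finding format and severity ladder are assumptions for illustration and are not Veracode's actual API.

```python
def gate_build(findings, max_severity="medium"):
    """Fail the build when any static-analysis finding exceeds the
    allowed severity. `findings` is a list of (issue, severity) pairs
    in whatever format the scanning tool emits (illustrative here)."""
    order = ["low", "medium", "high", "critical"]
    limit = order.index(max_severity)
    blocking = [issue for issue, sev in findings if order.index(sev) > limit]
    return (len(blocking) == 0, blocking)

ok, blocking = gate_build([("sql-injection", "high"), ("weak-hash", "low")])
print(ok, blocking)  # False ['sql-injection']
```

Running this gate on every commit, rather than once before release, is what keeps remediation cheap: the vulnerable change is still fresh in the developer's mind.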
1.14. Project Timeline and Milestones
Title 10 of the Code of Federal Regulations, Part 73, Physical Protection of Plants and Materials, Section 73.54, Protection of Digital Computer and Communication Systems and Networks, requires licensees to provide high assurance that digital computers and communication systems and networks are adequately protected against cyber attacks, up to and including the design-basis threat. As specified by 10 CFR 73.54(b)(3), the cyber security plan is part of the physical security program. Physical protection and cyber security measures work together to prevent acts of radiological sabotage. The security program currently in effect, including the Access Authorization Program and the Insider Mitigation Program, protects plant equipment from undesired access by untrusted individuals. The Insider Mitigation Program critical group was expanded to cover cyber security staff under RG 5.77, an action completed by 31 March 2010. In conjunction with the other aspects of the Insider Mitigation Program, this action supports the resolution of insider hazards. The critical group of the Insider Mitigation Program includes any person who has electronic access, including administrative authority (e.g., server administrator rights), to modify one or more security controls linked to one or more critical digital assets, as well as anyone with a comprehensive understanding of the site-specific cyber defense policies.
Ensuring physical defense is a crucial driver of cyber security because it removes threat vectors associated with direct physical entry. The implementation of a deterministic isolation communication barrier provides security from remote attacks on plant structures. Although the deterministic isolation barrier is vital to defending against external cyber attacks, it also affects remote access to plant data systems by authorized staff; removing that remote access requires establishing and executing a comprehensive change management strategy (Ebert and Paasivaara 2017, p. 99). Sites and fleets generally understand the risks involved with portable media (e.g., USB flash drives, CDs) and portable computers (e.g., laptops) that connect to untrusted networks; cyber protection management and the organizational and technological controls that address portable media and devices will be introduced early in the project. A common control is a security control that, when fully enforced, provides cyber security to one or more Critical Digital Assets (CDAs) or Critical Systems (CSs). The defenses offered by common security controls may be inherited by CDAs and CSs in the facility, so the establishment of common controls is prioritized in the execution of the Cyber Protection Policy. Target sets are addressed in relation to their effect on protection. Target-set devices or elements shall be contained inside a protected or vital area, or shall be defined and recorded in compliance with the provisions of §73.55 and expressed in the licensee's security plan (Ebert and Paasivaara 2017, p. 99). The Site Physical Security Program provides high confidence in the safe design, risk evaluation, execution, installation, and testing of all CDA interface changes. Some plant changes to the CDAs can be made while the plant is in service.
Changes to CDAs whose functions support protection or operating requirements (e.g., safety, surveillance checks, organizational decisions, technical specification requirements, security) must be planned and carried out outside the usual day-to-day service of the CDA. Depending on the magnitude of the change and its possible effects on the network, it may take 18 to 24 months to be fully implemented. This time span means the change is planned and carried out at a time that minimizes the effect on plant protection and activity, up to and including scheduling the adjustment during a planned plant refueling outage. The following milestones apply to the introduction of the Cyber Protection Policy:
The evaluation of cyber attack during the production of target sets shall be carried out in compliance with the law; the cyber security policy will improve the defense-in-depth design of the defenses of the CDAs aligned with the target sets. The final date by which all administrative, operating, and technological cyber protection controls for CDAs will be implemented is set out in the draft Licensee Implementation Plan. Priority execution of the core facets of the cyber protection policy will be completed through the establishment of the following items, as defined in the schedule below, by 31 December 2021: deterministic separation, as described in the Defense-In-Depth Plan; and preparation of personnel and introduction of steps to add cyber security defense-in-depth protective strategies.
Full development of the network protection policy includes a variety of support activities. Main tasks include software and method development, the performance of individual critical digital asset (CDA) assessments, and the detection, coordination, and execution of individual asset access-control remediation measures via the site configuration program. Cyber Protection Review Teams are being set up to meet the program's specifications. The teams need a comprehensive awareness of plant processes and cyber protection management technologies, and a rigorous training curriculum will be needed to ensure the program is completed by competent employees.
Identify Critical Systems (CSs) and Critical Digital Assets (CDAs)
This milestone builds on work to define vital assets under NEI 04-04. The application of 10 CFR 73.54 broadens the scope of NEI 04-04, so the identification of sensitive assets must be reviewed. The following will be carried out by the completion date: identification of Critical Systems; and identification of Critical Digital Assets.
Establish a data security response plan
The Defense Policy builds on the high-level model in the Cyber Protection Plan and includes evaluating current site and organizational practices, a comparison against emerging criteria, changes as required, and communication to plant staff. The following will be carried out by the completion date: documentation of the defense-in-depth framework and protection policy; introduction and communication of changes to the current defense strategy; and preparation for introducing the defense-in-depth system.
Implementation of cyber protection defense-in-depth design
The installation of communication barriers defends the most important SSEP functions from remote assaults on plant networks. Isolating plant management structures from the Internet and large corporate systems is a significant achievement in the defense against external APTs. Recognizing the threat vectors related to electronic connectivity, priority will be given to implementing hardware-based deterministic isolation systems. Although the introduction of barriers is crucial to defending against external APTs, it also blocks access to core tracking and plant information systems for reactor engineers and other plant personnel. The removal of remote monitoring of reactor core monitoring systems involves the establishment and introduction of a comprehensive change management strategy to ensure the continued secure operation of the plants (Stellflue 2020, p. 67). Vendors will need to create program revisions to support the concept; the alteration will be designed, prioritized, and scheduled for completion. As software needs to be upgraded and data collected from isolated systems, a method for patching, upgrading, and independently scanning devices will be established. The following will be carried out by the completion date: installation of deterministic one-way devices to implement protective layer boundaries. The following aspect of this milestone will be achieved by 03/4/2020:
o Deployment of management, organizational and technological cybersecurity controls to resolve APTs by way of portable media, portable computers, and portable devices will be completed.
Develop Cyber Protection Policy/Procedures
The execution of the cyber protection program is likely to entail policy/procedure formulation and/or updates for virtually every plant department. Procedural developments for the cyber protection program's specifications and all human security measures will be far-reaching. Many security measures will require implementing technological mechanisms for enforcing control in the nuclear plant setting, including the creation of new surveillance, continuous inspection, and review procedures. The production of procedures will begin early in the program's execution and will proceed until the specified completion date.
The implementation and adjustment of related policies and procedures will follow a risk-based approach, enabling security measures to be handled as defined during the evaluation. The following will be enforced by the completion date: policies/procedures will be revised to define the Cyber Protection Program; the Cyber Security Evaluation Procedure will be issued; new policies/procedures, or changes to current ones, in areas affected by cyber security standards will be established and implemented; and, by 12/31/2021, the remaining aspects of this milestone will be developed and implemented.
Conduct and log the data protection evaluation mentioned in the Cyber Security Strategy
Based on the current data protection policy, it is understood that a significant number of digital assets await evaluation. As mentioned above, the CDA evaluation approach required by this Regulation is highly robust and deterministic, and a large investment of resources will be needed to complete these evaluations. The tests will not begin until the CSAT and the necessary protocols have been thoroughly defined (Sharp and Lang 2018, p. 45). The evaluation will involve various disciplines and may include document checks, device design tests, physical walkdowns or electronic verification of all communication pathways for each CDA, and recording of outcomes. These activities may need to be organized and scheduled to comply with departmental resource utilization and device control specifications. The following will be carried out by the completion date: data protection reviews will be conducted and reported.
Implement protection measures that do not need plant alteration.
At this point the Cyber Protection Program has been applied and has reached the maintenance phase. While the extent of the individual CDA remediation appraisal activities is unclear, a substantial effort is anticipated based on the number and sophistication of the protection measures needed. The specific CDA remediation steps will need to be planned, resourced, and enforced. This date is a commitment only for remediation activities that do not involve a plant modification; changes involving plant alteration can be made during the continuing management of the cyber protection program. A comprehensive preparation procedure is used to ensure the safe implementation of refueling-outage work. Device changes required by this Legislation must be deliberately engineered and enforced so that healthy plant operations are not adversely affected.
The program will be deemed applied, and shifted to the maintenance phase, once the changes have been implemented or are budgeted and scheduled for implementation. The following shall be carried out by the completion date: security measures that do not involve plant modification shall be implemented, and the installation of security controls involving plant changes shall be planned, budgeted, and scheduled. From that day, during the continuing maintenance of the program, the provisions of Section 4 of the Plan will be effective, even where the execution of plant changes set out in the timetable above is not yet complete.
Implement security measures that include alteration of the facility.
By this date, where pending changes or controls required a planned plant refueling outage, the adjustments will have been implemented, protocols updated, and preparation completed. By this date, all management, operating, and technological protection controls for CDAs will be carried out.
1.15. Framework for Accessing the Project
The project will follow an agile architecture methodology for optimizing defense activities in the business environment. The agile growth approach focuses on enhancing the security architecture and operational IT infrastructure to allow the enterprise to handle its information systems (Wright, 2020). This solution includes virtualizing the operating-system environment, incorporating hot-site swap, designing and implementing a cloud-based host-based security mechanism, and implementing an intrusion detection system. In addition, network kill switches on the virtual LAN will be installed to route traffic to a hot site, offering efficient surveillance of APTs.
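The intrusion detection system mentioned above is commonly signature-based. The Python sketch below shows only the signature-matching idea; the rule names, patterns, and payloads are illustrative, and production IDSs such as Snort or Suricata use far richer rule languages and stateful inspection.

```python
import re

# Illustrative signature set -- not real IDS rules.
SIGNATURES = {
    "suspicious-useragent": re.compile(rb"User-Agent: .*sqlmap", re.I),
    "cmd-injection": re.compile(rb";\s*(cat|wget|curl)\s"),
}

def inspect(payload: bytes):
    """Return the names of all signatures matched by a raw payload."""
    return [name for name, rx in SIGNATURES.items() if rx.search(payload)]

print(inspect(b"GET / HTTP/1.1\r\nUser-Agent: sqlmap/1.5\r\n"))
```

Signature matching catches known tooling cheaply, which is why the plan pairs it with the kill-switch rerouting above: signatures detect, the reroute contains.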
Agile is the umbrella term for multiple iterative and incremental software-creation processes, each of which is its own Agile framework. The most popular Agile models are Scrum, Crystal, the Dynamic Systems Development Method, and Feature-Driven Development. Mendix subscribes to the Scrum technique (Abdelkebir, Maleh, and Belaissaoui 2017, p. 5). While each type of Agile methodology has its own distinctive characteristics, they all incorporate iterative development and continuous feedback while creating an application. Each Agile software process involves continuous preparation, continuous testing, rigorous execution, and other aspects of the professional production of both the project and the product resulting from the Agile method. Every Agile framework is lightweight: rules and protocols are kept to a minimum compared to traditional waterfall-style approaches and are designed to be flexible across all kinds of situations. The priority is on empowering developers of all types to collaborate and make decisions together as a group securely and effectively (Cao, Mohan, Xu, and Ramesh 2009, p. 332). The guiding vision of the Agile development method is to create applications in small increments, with each increment reviewed before it is considered complete. This approach ensures that quality is designed into the product rather than checked for later.
Scrum is among the most widely used Agile frameworks. There are two key roles in Scrum: the Scrum Master and the Product Owner. The Scrum Master is a coach and a gatekeeper (Cao, Mohan, Xu, and Ramesh 2009, p. 332), responsible for executing the Agile structure, giving feedback, planning with the Scrum team, and removing obstacles and disruptions that hinder the team's work. The Product Owner is, foremost, the project's subject-matter expert (Cao, Mohan, Xu, and Ramesh 2009, p. 332). The Product Owner keeps track of the project members' goals, defines and gathers the equipment and resources required by the Scrum team, and communicates the product vision to set further targets (Cao, Mohan, Xu, and Ramesh 2009, p. 332). The Scrum Master and the Product Owner coordinate and manage the Scrum team, which is continuously in production. The team is typically made up of a broad set of cross-disciplinary members, including engineers, designers, architects, and testers.
In the 1990s, several lightweight approaches, such as Extreme Programming, the Dynamic Systems Development Method, Scrum, and Crystal Clear, were created in response to heavyweight software development strategies, as alternatives to the conventional method (Cao, Mohan, Xu, and Ramesh 2009, p. 332). In 2001, representatives of the lightweight approaches united and released the Agile Manifesto, and software development approaches under the Agile Manifesto umbrella have been known as Agile since then. However, since Agile approaches were introduced over a decade ago, some modern I.T. problems, particularly the protection of the information infrastructure, are not incorporated in the framework. Security must be incorporated into Agile product-creation methods to prevent vulnerabilities in information systems. This segment discusses the approaches used to incorporate protection into Agile software-creation methods. To explain how the approaches operate, the paper first describes Agile processes, information systems security, Scrum, the Dynamic Systems Development Method, and Extreme Programming (Cao, Mohan, Xu, and Ramesh 2009, p. 332). This is followed by a survey of recent research on adopting security into three Agile approaches (May, York, and Lending 2016). The paper ends with an evaluation of the various methods for incorporating protection into each Agile process.
Data Security and the Agile Approach
Agile processes are software development strategies that adopt the Agile Manifesto. The Agile Manifesto (quoted in Türpe and Poller 2017) states: "We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value: individuals and interactions over processes and tools; working software over comprehensive documentation; customer collaboration over contract negotiation; and responding to change over following a plan. That is, while there is value in the items on the right, we value the items on the left more." The key distinction between the Agile models and the conventional waterfall approach is the development framework. In the waterfall approach, all specifications and criteria of the product follow a sequential design process. In Agile development methods, the project is split into iterations, and a part of the product specifications is delivered in each iteration. When an iteration is done, the iteration life cycle repeats until all product specifications have been fulfilled.
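The iteration life cycle just described can be sketched as a simple loop: each pass takes one slice of the requirements, implements it, and reviews it before it counts as complete; work that fails review returns to a later iteration. This is a schematic illustration, assuming the review eventually passes, not a prescribed Agile algorithm.

```python
def develop(requirements, implement, review):
    """Deliver the product in small increments: one requirement slice per
    iteration, each reviewed before it is considered complete."""
    remaining = list(requirements)
    product = []
    while remaining:
        increment = remaining.pop(0)     # one slice per iteration
        work = implement(increment)
        if review(work):                 # reviewed before it counts as done
            product.append(work)
        else:
            remaining.append(increment)  # rework in a later iteration
    return product
```

Contrast this with waterfall, which would run `implement` over the full specification once and review only at the end.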
Agile is a common methodology for software creation. Abdelkebir, Maleh, and Belaissaoui (2017) report that, relative to other development strategies such as ad hoc, Lean, and traditional methods, Agile methodologies produce greater R.O.I., efficiency, stakeholder loyalty, team productivity, and implementation speed. It is among the three most popular software-creation methodologies (Türpe and Poller 2017). Many approaches apply the principles of the Agile Manifesto; well-known Agile methodologies include Scrum, Extreme Programming, Crystal, the Dynamic Systems Development Method, and Feature-Driven Development (Cao, Mohan, Xu, and Ramesh 2009, p. 332). Each approach or strategy is characterized by its own strengths and disadvantages. Considering the features of the system at hand, the implementation team can select the right method or combine a variety of methods.
Information Systems Security
Information systems security (I.S.S.) is defined as the protection of information systems against unauthorized access to, or modification of, information, whether in processing, storage, or transit, and against denial of service (D.o.S.) to authorized users. It includes the measures necessary to detect, document, and counter such threats (Arthur and Dabney 2017). To assess information systems' security, the C.I.A. triad has been adopted as the industry norm for data security since the mainframe era (Klepper 2020). The C.I.A. triad comprises three characteristics of information protection: confidentiality, integrity, and availability. Confidentiality: information is protected from unauthorized persons or programs. Integrity: information is accurate, uncorrupted, and complete. Availability: authorized users may access the information in the proper format without interruption or obstruction. If I.S.S. is pictured as a building, the C.I.A. triad of confidentiality, integrity, and availability forms its foundation. To sustain a secure system, information technology compliance, including I.S.S. and policy implementation, must be enforced on an ongoing basis.
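The integrity property in particular lends itself to a concrete sketch: record a cryptographic digest when data is stored, then recompute and compare it later, so that any corruption or tampering is detected. This is a minimal illustration using the standard SHA-256 hash; the function names are illustrative.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Record a SHA-256 digest at the time the data is stored."""
    return hashlib.sha256(data).hexdigest()

def verify_integrity(data: bytes, recorded_digest: str) -> bool:
    """Recompute and compare: any corruption or tampering changes the digest."""
    return fingerprint(data) == recorded_digest
```

A tampered copy of the data fails verification against the originally recorded digest, while the unmodified data passes.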
Integrating defense into Agile Development Methodology
The Agile team must address several challenges when incorporating protection into Agile application development. First, there is the burden of short iterations (Arthur and Dabney 2017): the limited period of each iteration (only a few weeks) is not sufficient to run the required security evaluations. Second, there is a shortage of computer-security expertise: most programmers do not have adequate knowledge of security concerns, and consequently some tend to disregard them. A further problem is the absence of security awareness among customers (Bartsch, 2011). Security concerns need to be addressed while designing specifications, but customers often do not prioritize security, since security does not itself make the project profitable; it only prevents losses. Security researchers also face the question of the compatibility of security activities with Agile methodologies. Threat education and awareness, building a protection unit, static code analysis, security-requirements analysis, and security design review are the five main security practices consistent with Agile methodologies (Arthur and Dabney 2017). Threat modeling is Agile's largest security-relevant obstacle: it is complex, and it involves no customer engagement (Klepper 2020).
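Of the five practices listed, static code analysis is the easiest to fit into a short iteration, because it can run automatically in the sprint's build. A minimal sketch of the idea, assuming a hypothetical pattern-based rule set (real tools use far richer analyses):

```python
import re

# Hypothetical rule set: patterns a lightweight static-analysis gate might flag.
RULES = {
    "hard-coded credential": re.compile(r"password\s*=\s*['\"]"),
    "shell command injection risk": re.compile(r"os\.system\("),
    "weak hash algorithm": re.compile(r"hashlib\.md5\("),
}

def scan(source: str):
    """Return (line_number, rule_name) findings so a sprint's build can fail fast."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

Because the scan is cheap, it can run on every commit, which is exactly what the short Agile iteration demands of a security practice.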
Suggested incorporation of defense practices into Agile Methodology
While Agile methodologies follow similar concepts, each has its own ways of implementing protection. This section reflects on three Agile software development strategies: Extreme Programming, Scrum, and the Dynamic Systems Development Method. Since only one or two security-integration strategies have been implemented with each process, several techniques are absent from the executive report.
References
Abdelkebir, S.A.H.I.D., Maleh, Y. and Belaissaoui, M., 2017, November. An Agile Framework for ITS Management In Organizations: A Case Study Based on DevOps. In Proceedings of the 2nd International Conference on Computing and Wireless Communication Systems (pp. 1-8).
Arthur, J.D. and Dabney, J.B., 2017, April. Applying standard independent verification and validation (IV&V) techniques within an Agile framework: Is there a compatibility issue?. In 2017 Annual IEEE International Systems Conference (SysCon) (pp. 1-5). IEEE.
Bawany, N.Z. and Shamsi, J.A., 2019. SEAL: SDN based secure and agile framework for protecting smart city applications from DDoS attacks. Journal of Network and Computer Applications, 145, p.102381.
Bouazzaoui, S. and Daniels, C., 2020, March. Electronic Healthcare Record and Cyber Security Threats: A Development of an Agile Framework. In ICCWS 2020 15th International Conference on Cyber Warfare and Security (p. 67). Academic Conferences and publishing limited.
Cao, L., Mohan, K., Xu, P. and Ramesh, B., 2009. A framework for adapting agile development methodologies. European Journal of Information Systems, 18(4), pp.332-343.
Ebert, C. and Paasivaara, M., 2017. Scaling agile. IEEE Software, 34(6), pp.98-103.
Fireeye (n.d.) Advanced Persistent Threat Groups. Retrieved from https://www.fireeye.com/current-threats/apt-groups.html.
Georg, L. (2017). Information security governance: Pending legal responsibilities of non-executive boards. Journal of Management & Governance, 21(4), 793-814.
Harris, A.B., 2019. Exploring The Agile System Development Best Practices Cybersecurity Leaders Need to Establish A Cyber-Resilient System: A Phenomenological Study (Doctoral dissertation, Colorado Technical University).
Horton, S., 2020. Are Software Security Issues a Result of Flaws in Software Development Methodologies? (Doctoral dissertation, Utica College).
Javidi, G. and Sheybani, E., 2018, October. K-12 cybersecurity education, research, and outreach. In 2018 IEEE Frontiers in Education Conference (FIE) (pp. 1-5). IEEE.
Klepper, S., 2020, November. How to Integrate Security Compliance Requirements with Agile Software Engineering at Scale?. In Product-Focused Software Process Improvement: 21st International Conference, PROFES 2020, Turin, Italy, November 25–27, 2020, Proceedings (Vol. 12562, p. 69). Springer Nature.
Kumar, R., & Goyal, R. (2019). On cloud security requirements, threats, vulnerabilities, and countermeasures: A survey. Computer Science Review, 33, 1-48.
May, J., York, J. and Lending, D., 2016. Play ball: bringing scrum into the classroom. Journal of Information Systems Education, 27(2), pp.87-92.
Moyon, F., Beckers, K., Klepper, S., Lachberger, P. and Bruegge, B., 2018, May. Towards continuous security compliance in agile software development at scale. In 2018 IEEE/ACM 4th International Workshop on Rapid Continuous Software Engineering (RCoSE) (pp. 31-34). IEEE.
Sharp, J.H. and Lang, G., 2018. Agile in teaching and learning: Conceptual framework and research agenda. Journal of Information Systems Education, 29(2), pp.45-52.
Singhal, A., 2011, January. Development of agile security framework using a hybrid technique for requirements elicitation. In International Conference on Advances in Computing, Communication and Control (pp. 178-188). Springer, Berlin, Heidelberg.
Stellflue, S.M., Emeraldal Technologies LLC, 2020. Agile human resources method within the financial informatics field. U.S. Patent Application 16/914,760.
Türpe, S. and Poller, A., 2017. Managing Security Work in Scrum: Tensions and Challenges. In SecSE@ ESORICS (pp. 34-49).
Wright, C. (2020). Essentials for selecting a network monitoring tool. Network Security, 2020(4), 11-14.
Yanakiev, Y. and Tagarev, T., 2020, June. Governance Model of a Cybersecurity Network: Best Practices in the Academic Literature. In Proceedings of the 21st International Conference on Computer Systems and Technologies’ 20 (pp. 27-34).
Pohl, C. and Hof, H.J., 2015. Secure scrum: Development of secure software with scrum. arXiv preprint arXiv:1507.02992.