
How aspects of computer technology have evolved


Abstract

This research paper examines how aspects of computer technology have evolved. The introduction considers the importance of advancements in pipelining, virtual memory, RISC, and cache memory. The literature review covers the history of these concepts and establishes that the gap between processor and memory performance has steadily widened since the 1980s. The methodology chapter describes the content analysis of secondary qualitative data. The results and discussion sections present the current state of the four computer aspects as established through that approach. Notably, pipelining is a standard feature of RISC processors. Finally, the paper concludes that the system performance of computer technology has improved.

Contents

Abstract

Chapter 1: Introduction

Chapter 2: Literature Review

2.1 RISC

2.2 Pipelining

2.3 Cache Memory

2.4 Virtual Memory

Chapter 3: Methodology

Chapter 4: Results and Discussion

4.1 RISC

4.2 Pipelining

4.3 Cache Memory

4.4 Virtual Memory

Chapter 5: Conclusion

References

 

 

Chapter 1: Introduction

Over the years, computer technology has evolved to improve its performance. RISC reduces the cycles per instruction (CPI) of a computer’s microprocessor compared with CISC. RISC’s distinguishing feature is its optimization of instructions to keep the pipeline flow highly regular. With pipelining, the elements of data processing are arranged in order, so that the output of one stage is the input of the next. The cache, in turn, is a component that makes future data requests fast. Virtual memory, by contrast, creates the illusion of a massive memory by appearing to exist as central storage, although most of it resides in secondary storage. Advancements in system performance over the last 25 years are largely attributable to the evolution of RISC, virtual and cache memory, and pipelining, with virtual memory being the most important because of its ability to conceal physical memory fragmentation.

 

Chapter 2: Literature Review

2.1 RISC

The RISC concept as we know it was established in the 1980s. PA-RISC, ARM, Atmel AVR, Alpha, and M88000 are a few examples of RISC design varieties. In 1974, John Cocke showed that roughly 80% of a computer’s work resulted from only 20% of its instructions. RISC-based systems have seen extensive use thanks to processors built on the ARM architecture (Patterson, 2017). In January 2020, TOP500 ranked Summit, which uses RISC processors, as the fastest supercomputer in the world. RISC’s characteristics resemble those of Alan Turing’s ACE, and the load/store approach of several early systems is credited as RISC’s first architecture. The Berkeley RISC project began in 1980 and applied register windowing and pipelining to gain performance. The MIPS project kicked off in 1981 and relied on pipelining and an aggressive clock cycle (Patterson, 2017).

2.2 Pipelining

Pipelines can be graphics, HTTP, software, or instruction pipelines. A simpler form of pipelining was used in the Z1 and Z3 in 1939 and 1941, respectively. Its seminal use began with the IBM Stretch and ILLIAC II projects in the ’60s. Pipelining development started in earnest in the ’70s, and most companies worldwide were using it by the mid-’80s (Repetti et al., 2017). Nor is it limited to supercomputers: Amdahl Corporation used a seven-step pipeline in its mainframe computers. Self-modifying code also poses challenges for pipelined processors, since an instruction already fetched into the pipeline may not reflect a later modification. In 1987, the Zilog Z280 addressed this by configuring its on-chip cache memory for data fetches only (Repetti et al., 2017). A further hazard arises when executing an uninterruptible instruction makes otherwise ordinary instructions uninterruptible as well: in 1996, the Cyrix coma bug hung single-core systems with an infinite loop in which the pipeline always contained an uninterruptible instruction.

2.3 Cache Memory

Pairing a small, high-speed cache memory with a sizeable low-speed main memory is an economical way of improving performance. In the ’60s, mainframe computers used physical memories mapped onto flat virtual memory spaces. The performance gap between memory and processors has grown since the ’80s (Mittal, 2014). The IBM System/360 Model 85 was the first documented use of a data cache. The Motorola 68010, released in 1982, had a loop mode that can be regarded as a tiny cache: it accelerated loops consisting of two instructions. Its 1984 successor, the Motorola 68020, added a 256-byte instruction cache. The Motorola 68060, released in 1994, went further still (Mittal, 2014): its 8 KB instruction cache and 8 KB data cache were four-way associative, and it also had a 256-entry branch cache and a 96-byte FIFO instruction buffer.

2.4 Virtual Memory

The introduction of virtual memory facilitated and eased the extension of primary memory. In 1956, the German physicist Güntsch described a form of cache memory in his doctoral thesis. The high-speed memory he proposed was to hold a copy of some data and code blocks drawn from a drum (Kamar, 2018). Güntsch’s system is analogous to computers with cache memory: the Model 85, for example, had real addresses but a cache store invisible to the user, which held the content used by the programs executing at the time. The first true virtual memory system had single-level storage and was part of the Atlas Computer. It used a paging mechanism to map the available virtual addresses onto real memory. In 1961, Burroughs Corporation independently released the B5000 (Kamar, 2018). It became the first commercial computer with virtual memory, using segmentation instead of paging.

 

Chapter 3: Methodology

The research question investigated the evolution of, and modern trends in, improving system performance. The study used secondary qualitative data. The research is reliable because the articles used are held in the UMGC Library and the journals cited are peer-reviewed. Content analysis followed and proved useful in establishing the evolution of computer technology.

 

Chapter 4: Results and Discussion

4.1 RISC

RISC microprocessors operate at higher speeds because their design lets them execute many small, simple instructions quickly. Thoughtful microprocessor designs now revolve around the RISC concept; a case in point is the matching of instructions to the microprocessor’s clock speed (Patterson, 2017). RISC and related designs offer further advantages. First, their simplicity gives designers more freedom in how to use space on the microprocessor. Second, high-level-language compilers produce more efficient code because they tend to use the small instruction set of RISC computers. Finally, developing and testing a less complicated microprocessor is quicker.
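The simplicity described above can be sketched in code. The toy format below (an 8-bit opcode and two 12-bit fields in a 32-bit word) is invented for illustration and does not correspond to any real RISC instruction set; the point is that fixed-length instructions make decoding and fetch-address calculation trivial.

```python
# Toy sketch of fixed-length instruction decoding. The 3-field
# layout (8-bit opcode, two 12-bit operand fields) is hypothetical,
# not taken from any real ISA.
def decode(word: int) -> dict:
    """Split a 32-bit instruction word into fields by bit masking."""
    return {
        "opcode": (word >> 24) & 0xFF,
        "rd": (word >> 12) & 0xFFF,
        "rs": word & 0xFFF,
    }

def next_pc(pc: int) -> int:
    # Because every instruction is exactly 4 bytes, the fetch unit can
    # compute the next instruction address without decoding the current one.
    return pc + 4

fields = decode(0x01002003)
```

With variable-length (CISC-style) instructions, by contrast, the decoder must examine each instruction before it even knows where the next one begins.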

4.2 Pipelining

Pipelining is a standard feature of RISC processors. A pipeline resembles an assembly line because the processor works on different steps of several instructions simultaneously, so each instruction completes in less time. The pipeline’s cycle time is set by its longest stage. RISC instructions are more conducive to pipelining than those of pre-RISC (CISC) processors because they are more straightforward (Repetti et al., 2017): RISC instructions can be fetched in a single operation since they are all the same length, whereas CISC instruction lengths vary. However, processors stall at times due to branch instructions and data dependencies; branch prediction and code reordering are solutions to these problems. Super-pipelining is among the developments devised: it divides the pipeline into more steps, rendering each shorter and thus faster.
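The assembly-line speedup can be illustrated with a simple cycle count. This is a minimal sketch assuming an idealized k-stage pipeline with no stalls; real pipelines lose cycles to the branch and data hazards mentioned above, so the actual speedup is lower.

```python
# Idealized pipeline model: no stalls, one instruction issued per cycle.
def cycles_unpipelined(n_instructions: int, k_stages: int) -> int:
    # Each instruction occupies the whole datapath for k cycles.
    return n_instructions * k_stages

def cycles_pipelined(n_instructions: int, k_stages: int) -> int:
    # The first instruction takes k cycles; each later one finishes
    # one cycle after its predecessor, like an assembly line.
    return k_stages + (n_instructions - 1)

n, k = 1000, 5
speedup = cycles_unpipelined(n, k) / cycles_pipelined(n, k)
```

For large instruction counts the speedup approaches k, which is why deepening the pipeline (super-pipelining) raises the ceiling on performance.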

4.3 Cache Memory

Cache memory is a high-speed memory used to keep pace with the CPU’s high speed. It is cheaper than CPU registers but costlier than disk memory. Acting as a buffer between the CPU and RAM, it holds frequently requested instructions and data, making them readily available to the CPU. To read or write a location in main memory, the processor first checks for a corresponding cache entry: a cache hit occurs when the processor finds the memory location in the cache, and a cache miss when it does not. The hit ratio quantifies cache performance (Mittal, 2014). It is improved by actions such as reducing both the miss penalty and the miss rate.
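The hit ratio can be made concrete with a small model. The sketch below assumes a tiny LRU (least-recently-used) cache; the 4-entry capacity and the access trace are invented for illustration, and real caches use hardware set-associative lookup rather than a dictionary.

```python
from collections import OrderedDict

def hit_ratio(trace, capacity=4):
    """Replay an address trace through a toy LRU cache; return hits/accesses."""
    cache, hits = OrderedDict(), 0
    for addr in trace:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)        # cache hit: refresh recency
        else:
            cache[addr] = True             # cache miss: fetch from "RAM"
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict the least recently used entry
    return hits / len(trace)

# A loop over a small working set hits often once the set fits in the cache.
ratio = hit_ratio([1, 2, 3, 1, 2, 3, 1, 2, 3, 9])
```

The example trace shows why locality of reference matters: after the three compulsory misses, every repeated access is a hit.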

4.4 Virtual Memory

With virtual memory, the secondary memory is addressed as if it were part of the main memory. The addresses the memory uses to identify physical storage sites are distinguished from the ones programs use for memory references (Kamar, 2018). Demand segmentation and demand paging are ways of implementing virtual memory. Demand paging occurs when a page fault causes the page to be loaded into memory on demand. It proves advantageous because the main memory can hold more processes.
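Demand paging can be sketched as a fault counter. In this hedged illustration the frame count and reference string are invented, and FIFO replacement is chosen purely for simplicity (real kernels use approximations of LRU).

```python
from collections import deque

def page_faults(references, n_frames=3):
    """Count page faults for a reference string under FIFO replacement."""
    frames, faults = deque(), 0
    for page in references:
        if page not in frames:
            faults += 1                  # page fault: load the page on demand
            if len(frames) == n_frames:
                frames.popleft()         # evict the oldest resident page
            frames.append(page)
    return faults

faults = page_faults([0, 1, 2, 0, 3, 0, 4], n_frames=3)
```

Because pages are brought in only when faulted on, each process needs just its working set in RAM, which is why main memory can hold more processes at once.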

 

Chapter 5: Conclusion

Computer technology is ever-evolving. RISC has steadily developed since the 1980s: the load/store approach was its first architecture, and today RISC designs execute many small, simple instructions. Pipelining began developing in the ’70s and appears in both supercomputers and mainframe computers; it shortens execution time by working on different instruction steps simultaneously. With cache, the focus has shifted from aspects such as cycle time to fault tolerance, and this high-speed memory keeps pace with the CPU. Finally, virtual memory extends primary storage and is implemented through demand segmentation and demand paging. Content analysis of peer-reviewed articles enabled the collection of secondary qualitative data on the evolution of computer system performance. Virtual memory is the most important of the four aspects because it hides physical memory fragmentation, making application programming easier: the kernel manages the memory hierarchy, so the program does not have to deal with overlays explicitly.
