Examination of Computer System Components
Running head: GLOBAL HARDWARE AND SOFTWARE
GLOBAL HARDWARE AND SOFTWARE
Name of the Student
Name of the University
Author Note:
Table of Contents
LO1: Examination of the function of computer system components
    Introduction
    Logical/physical component function
    Different types of memory, roles and attaching memory to the processor
    Description of how processors are connected to devices using buses and memory
    Conclusion
    References
LO2: Discuss how data and programs are represented in the computer system
    Introduction
    Discussion
    Representation of data and programs in the computer system
    Conversion of floating point and storage in computer
    Boolean logical operations, including the adder circuit used for adding binary numbers
    How an ICE target system debugger operates in a system
    Conclusion
    References
LO4: Investigate advanced computer architecture and performance
    Introduction
    Discussion on the DirectX API
    Demonstrating the pros and cons of the DirectX API
    How the DirectX API can control graphics functions
    Critical evaluation of computer performance developments with pipelining architectures and MIMD
    Discussion on pipelining architectures
    Understanding computer performance improvements with MIMD
    Conclusion
    References
LO1: Examination of the function of computer system components
Introduction
A computer can be described as a combination of hardware and software working in a completely
integrated way. Its purpose is to provide different kinds of functionality to the user. Hardware
is the physical component of the system, such as the memory devices, keyboard and
processor (Silberschatz, Gagne and Galvin 2018). Software is the set of programs that the
hardware needs in order to function properly. Together these basic components give the system
its working cycle: the input-process-output cycle is the functional core of a system (Richter,
Götzfried and Müller 2016). Given proper input, the system can produce the desired output.
Logical/Physical component function
A computer is a programmable machine that reads binary data, processes instructions in
binary form and provides the required output (Khoroshilov, Kuliamin and Petrenko 2017). A
digital computer is one that works on digital data.
Input unit: This unit comprises the input devices attached to the system. Input is taken
through these devices and converted into binary language, which the system understands.
Central Processing Unit (CPU): As soon as the required information is entered into the
system through an input device, the processor processes it. The CPU is described as the brain of
the system because it is the control centre of the whole computer. It first fetches the required
data and instructions from memory and then interprets them so that the next step can be carried
out (Kanev et al. 2016). Where necessary, data is fetched from both the input devices and
memory. The main function of the CPU is to execute the required computation and store the
result (Yildiz, Lekesiz and Yildiz 2016), which is then passed to the output devices for display.
The CPU comprises three major units: the arithmetic logic unit, the control unit and the memory
unit.
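The fetch-decode-execute cycle just described can be made concrete with a short sketch. The following Python program is purely illustrative: the three-instruction machine it simulates is invented for this example and does not correspond to any real processor.

```python
# Illustrative three-instruction machine (invented for this sketch).
program = [
    ("LOAD", 7),    # put the constant 7 in the accumulator
    ("ADD", 5),     # add the constant 5 to the accumulator
    ("STORE", 0),   # store the accumulator into data cell 0
]
data = [0]          # one-cell data memory
acc = 0             # accumulator register
pc = 0              # program counter

while pc < len(program):
    opcode, operand = program[pc]   # fetch the next instruction
    pc += 1
    if opcode == "LOAD":            # decode and execute
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "STORE":
        data[operand] = acc

print(data[0])  # prints 12
```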
Arithmetic logic unit: All mathematical calculations and logical decisions are made in the
ALU. Arithmetic calculation includes addition, subtraction, division and multiplication (Dihoru
et al. 2019). A logical decision compares two given data items, for example to determine which
one is larger.
Control unit: The main function of the control unit is to coordinate and control the flow of
data into and out of the CPU. It controls the operation of the ALU, the memory registers and the
associated input/output units, and it ensures that every instruction in the program is carried out
(Comer 2017). It decodes and interprets each instruction, and it sends the required signals to the
input and output devices until the operation has been completed by the ALU and memory.
The electrical and mechanical equipment in a computer system is defined as hardware;
examples are the motherboard, RAM, processor, monitor and mouse. Software describes the
various programs that carry out tasks on the system. A computer can thus be seen as a collection
of electronic and mechanical devices operating as a single unit. The system unit is the main
container for the system's devices.
It is required to protect delicate mechanical and electronic devices from damage; the devices it
contains include the motherboard, disk drives, ports and expansion cards. Peripherals are devices
connected to the system using cables or wireless technologies; typical peripherals are the
monitor, printer, scanner and speakers. The processor is an integrated circuit supplied on a single
silicon chip, and its function is to control all of the computer's operations; the major
manufacturers are AMD and Intel. System programs are sets of instructions: the program runs
and the processor carries out each instruction in the proper order. Typical instruction types are
arithmetic, logical and move instructions; a move instruction moves data from one place to
another within the system. The processor needs memory in order to carry out instructions.
Processor speed is measured in megahertz (MHz) or gigahertz (GHz). The clock speed
controls how fast instructions can be executed; 1 MHz corresponds to one million clock ticks per
second. Present-day multi-core processors place two, four or more processor cores on a single
chip. Random access memory (RAM) is the computer memory in which running data and
programs are held. It is volatile memory, so its contents are lost when the system is turned off.
The technology currently in use is DDR (Double Data Rate) memory, which comes in three main
generations: DDR1, DDR2 and DDR3. The motherboard, also known as the system board, is the
main circuit board of the system, and every device in the system connects to it. Different kinds
of processor need different sockets, so a motherboard must be chosen to suit the processor.
The chipset controls the flow of data around the whole system. It traditionally consists of
two chips, the northbridge and the southbridge. The northbridge handles the flow of data
between the processor and memory, and between the processor and the graphics card. The
southbridge handles the flow of data between devices such as USB, SATA and LAN, and it
controls the PCI slots and on-board graphics. A bus is a path by which data can be sent to the
various parts of the computer system. The system power supply performs a number of functions,
such as converting alternating current to direct current.
Different types of memory, roles and attaching memory to the processor
Memory is an essential part of a system; without memory, a system would be practically
unusable. Memory plays a key role in both retrieving and saving data (Yan, Song and Wu 2016),
and the overall performance of a system depends heavily on the size of its memory.
Fig 1: Different Kind of Computer Memory
(Source: Bowden 2016)
Primary memory: This is the internal memory of the system. Both RAM and ROM are
important parts of primary memory. This memory provides the working space needed by the
system (Shen et al. 2017). Several kinds of memory fall under primary memory.
Random Access Memory (RAM): Primary storage is called random access because any
location in the memory can be selected directly for storing and retrieving data; an access begins
by addressing the required memory location (Begin and Brandwajn 2016). RAM is read-and-
write memory, and the data and instructions held in it are temporary.
Read-Only Memory (ROM): ROM is another kind of memory in the system, implemented
as an integrated circuit inside the PC. The storage of data and programs in ROM is permanent.
ROM stores vital programs provided by the manufacturer for the operation of the system
(Dumais et al. 2016). The CPU can read ROM, but its contents cannot be changed.
PROM: Programmable read-only memory is a chip onto which data can be written only
once. As soon as data has been written into a PROM, it remains there permanently, and the
contents are retained when the system is turned off. The basic difference between PROM and
ROM is that a PROM is manufactured as blank memory, while a ROM is programmed during
the manufacturing process. Writing data into a PROM requires a special device known as a
PROM programmer.
EPROM: Erasable programmable read-only memory is a special kind of memory that
retains its content until it is exposed to ultraviolet light. Reprogramming is possible because
exposure to ultraviolet light erases the whole content.
EEPROM: Electrically erasable programmable read-only memory is a special kind of
PROM that can be erased electrically, by removing its stored charges. This type of memory
retains its content even when the power is off.
NVRAM: Non-volatile RAM is a category of random access memory that can store data
even when the power is switched off. It is typically supplied as a tiny 24-pin dual in-line package
integrated circuit, and it draws the standby power needed to keep the CMOS settings
functioning.
Flash memory: A non-volatile chip used for storage and for data transfer between digital
devices and the system. It can be electrically erased and reprogrammed, and is in effect a kind of
EEPROM. Flash memory is used in USB flash drives, digital cameras and solid-state drives.
Virtual memory: A storage-allocation scheme under which secondary memory can be
addressed as though it were part of main memory. A program then uses virtual addresses that are
distinct from the physical addresses of the memory itself.
Secondary memory: Secondary memory is external and permanent, and it is chiefly
concerned with magnetic storage. Secondary memory can be held on a large number of media,
such as magnetic tapes and floppy disks.
Magnetic tapes: These tapes are mainly used in systems where large volumes of data are
collected and kept for a long time, since the cost of storing data on them is low. The tape is
coated with magnetic material that holds the data. The tape deck is connected to the central
processor, and the information on the tape is read with the help of the processor.
Magnetic disk: The magnetic disks used in computer systems are built on the same
principle. A disk rotates at high speed inside the drive, and data is stored on its surface
(Silberschatz, Gagne and Galvin 2018). At present the magnetic disk is the most popular medium
for direct-access storage. Each disk consists of a number of concentric circles known as tracks.
Computer ports are the interfaces between the system and its peripheral devices. They are
mainly found on the back of the computer, though some are built into the front for easy access.
Serial ports are 9-pin ports, also called COM1 and COM2; mice and external modems are
generally connected to these ports. The parallel port, by contrast, is a 25-pin port to which
scanners, printers and external hard drives are connected; parallel ports are known as LPT ports,
namely LPT1 and LPT2.
There are two main kinds of memory: primary memory and secondary memory. Primary
memory comprises RAM, ROM, cache memory, virtual memory and hybrid memory. The
system uses its input/output channels to access secondary storage, and data being transferred
passes through an intermediate area in primary storage. Secondary memory does not lose its data
when the device is powered down; it is non-volatile. Primary storage, also called main memory
or internal memory, is directly accessible to the CPU, which continuously reads the instructions
stored there and executes them. Main memory is connected, directly or indirectly, to the CPU
over the memory bus.
RAM comes in two important varieties: static RAM (SRAM) and dynamic RAM (DRAM).
The major difference between the two is the lifetime of the stored data. SRAM retains its whole
content for as long as
electrical power is applied to the chip; if the power is turned off, its contents are lost forever.
DRAM, by contrast, has a short data lifetime, on the order of four milliseconds, after which each
cell must be refreshed. The types of ROM are distinguished by the method used to write new
data into them, and the classification reflects the evolution of ROM devices from one-time
programmable to erasable parts. EPROM, erasable programmable ROM, is programmed in
exactly the same way as a PROM but can be erased and reprogrammed repeatedly. Hybrid
memory devices combine features of RAM and ROM and belong strictly to neither group: they
can be read and written like RAM, yet retain their data like ROM. Flash memory is the leading
hybrid device and offers the properties memory designers need: low cost, non-volatility and
high density. Cache memory is used by the central processing unit to reduce the average time
taken to access memory; the cache is a smaller, faster memory that stores copies of frequently
used data.
Compare the roles played by different types of memory
RAM (Random Access Memory) | ROM (Read Only Memory)
Holds the operating system, processes and programs while they run; it is needed for running the system. | Comes with the system holding pre-written instructions; these are mainly used for booting the system up.
Requires a constant flow of electricity to retain data. | Retains data without any flow of electricity once the system is switched off.
Volatile memory: data in RAM is not stored permanently, and it is deleted as soon as the system is turned off. | Non-volatile memory: data in ROM is written permanently and is not erased even when the system is powered off.
The following description covers the terminology and details of the IEEE 754 binary
representation of floating-point numbers. The discussion is confined to the single- and double-
precision formats.
A real number in binary is written in the general form

Im Im-1 … I2 I1 I0 . F1 F2 … Fn-1 Fn

where each integer digit I and fraction digit F is either 0 or 1. A finite floating-point number is
specified by four components: a sign (s), a base (b), a significand (m) and an exponent (e);
formats differ in the number of bits used to encode each component. The IEEE 754 standard
defines five basic formats. Of these, binary32 and binary64 are the most widely used; binary32 is
the single-precision format and has base 2.
As an example, the rational number 9/2 can be converted to single-precision floating-point
form:

9(10) ÷ 2(10) = 4.5(10) = 100.1(2)

The result is normalized by rewriting it with a single leading 1 bit: 1.001(2) × 2^2. Removing
the implied 1 on the left gives the fraction that is actually stored. A normalized number provides
more accuracy than a denormalized number, because every fraction bit is used to represent
significant digits; representations without the implied leading 1 are instead called subnormal.
Floating-point numbers are stored in normalized form whenever possible.
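This conversion can be checked in Python, whose float.hex() method prints the normalized binary significand and exponent. A minimal sketch (Python floats are double precision, but the normalized form is the same):

```python
# 9/2 = 4.5; float.hex() shows the normalized significand and exponent.
x = 9 / 2
print(x.hex())        # 0x1.2000000000000p+2, i.e. 1.001(2) x 2^2
# Hex significand 1.2 = 1 + 2/16 = 1.125 = 1.001 in binary:
print(1.125 * 2**2)   # 4.5
```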
Subnormal numbers fall under the category of denormalized numbers. The subnormal
representation is used for values whose exponent falls below the normal range and which
therefore cannot be normalized: the exponent field is held at its minimum and the fraction has no
implied leading 1 bit. Compared with normal numbers, subnormal numbers give up precision in
exchange for range, filling the gap between zero and the smallest normal number.
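Because Python floats are IEEE 754 double-precision values, the subnormal range can be observed directly. A minimal sketch using the standard sys module:

```python
import sys

smallest_normal = sys.float_info.min   # smallest normalized double
subnormal = smallest_normal / 2        # falls below the normal range
print(subnormal)                       # 1.1125369292536007e-308
print(subnormal.hex())                 # 0x0.8000000000000p-1022 (no leading 1)
print(5e-324)                          # smallest positive subnormal double
```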
Continuing the example, the true exponent is 2, which is encoded as 129 (127 + 2); 127 is
the bias of the single-precision format. Biasing the exponent field allows negative exponents to
be encoded without a separate sign bit, and it has the further benefit that floating-point numbers
can be ordered with a simple bitwise comparison.
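The biased encoding can be confirmed by packing a value into the binary32 format and splitting out the fields. A minimal sketch using the standard struct module, continuing the 4.5 example:

```python
import struct

# Raw 32-bit pattern of 4.5 in the binary32 (single-precision) format.
bits = struct.unpack(">I", struct.pack(">f", 4.5))[0]

sign     = bits >> 31                 # 1 bit
exponent = (bits >> 23) & 0xFF        # 8-bit biased exponent field
fraction = bits & 0x7FFFFF            # 23-bit fraction (implied leading 1)

print(f"{bits:032b}")       # 01000000100100000000000000000000
print(sign)                 # 0
print(exponent)             # 129, so the true exponent is 129 - 127 = 2
print(f"{fraction:023b}")   # 00100000000000000000000, i.e. 1.001(2)
```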
In floating point, accuracy is determined by the number of significand bits, while range is
limited by the exponent. Floating-point numbers approximate the real numbers: for a real value
x that is not itself a floating-point number, only two approximations are available, namely the
closest floating-point number less than x and the closest floating-point number greater than x.
Overflow occurs when the true result of an arithmetic operation is finite but larger in
magnitude than the largest floating-point number that can be stored at the given precision.
Underflow occurs when the true result of an arithmetic operation is smaller in magnitude than
the smallest normalized floating-point number that can be stored.
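Both behaviours are easy to reproduce with standard double-precision floats. A minimal sketch:

```python
import sys

big = sys.float_info.max   # largest finite double, about 1.8e308
print(big * 2)             # inf: the result overflows

tiny = sys.float_info.min  # smallest normalized double, about 2.2e-308
print(tiny / 2)            # subnormal result: gradual underflow begins
print(tiny / 2**53)        # 0.0: the result underflows all the way to zero
```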
IEEE 754 defines the binary floating-point formats themselves but leaves byte ordering to
the hardware manufacturer: the storage order of the individual bytes of a binary floating-point
value varies from architecture to architecture.
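This byte-order difference can be made visible with the struct module, which lets the programmer request either order explicitly. A minimal sketch:

```python
import struct
import sys

# The same binary32 value stored in the two common byte orders.
print(struct.pack(">f", 4.5).hex())   # 40900000 (big-endian)
print(struct.pack("<f", 4.5).hex())   # 00009040 (little-endian)
print(sys.byteorder)                  # byte order of the host machine
```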
P3: Representing different types of data in a computer system
Binary number: The binary number system is a positional number system with base two. It
comprises two digits, zero and one.
Fig 1: Conversion of Binary Number to Decimal Number
(Source: Created by Author)
Fig 2 : Conversion of Binary Number to Octal Number
(Source: Created by Author)
Fig 3: Conversion of Binary Number to Hexadecimal Number
(Source: Created by Author)
Decimal number system: This number system has a base of ten and uses the digits 0 to 9.
In this system, the digits to the left of the decimal point represent units, tens, hundreds and so on.
Fig 4: Conversion of Decimal Number to Binary Number
(Source: Created by Author)
Fig 5: Conversion of Decimal Number to Octal Number
(Source: Created by Author)
Fig 6: Conversion of Decimal Number to Hexadecimal Number
(Source: Created by Author)
Octal number system: A number-representation technique with a base value of 8, so only
eight symbols are possible: 0, 1, 2, 3, 4, 5, 6 and 7. Each octal digit corresponds to exactly
3 binary bits.
Fig 7: Conversion of Octal Number to Binary Number
(Source: Created by Author)
Fig 8: Conversion Octal to Hexadecimal
(Source: Created by Author)
Fig 9: Conversion of Octal to Decimal
(Source: Created by Author)
Hexadecimal: This system uses base 16 to simplify binary representation. A hex digit can
be any of the following: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F. Each hex digit corresponds to
a 4-bit binary sequence.
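The base conversions illustrated in the surrounding figures can be reproduced with Python's built-in conversion functions. A minimal sketch:

```python
n = 0b1101001          # binary literal, 105 in decimal

print(n)               # 105        binary -> decimal
print(oct(n))          # 0o151      binary -> octal (group bits in threes)
print(hex(n))          # 0x69       binary -> hexadecimal (group bits in fours)
print(bin(105))        # 0b1101001  decimal -> binary
print(int("151", 8))   # 105        octal -> decimal
print(int("69", 16))   # 105        hexadecimal -> decimal
```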
Fig 10: Hexadecimal to Binary Number
(Source: Created by Author)
Fig 11: Hexadecimal to Octal Number
(Source: Created by Author)
Fig 12: Hexadecimal to Decimal Number
(Source: Created by Author)
Description of how processors are connected to devices using buses and memory
The system bus is a pathway, made up of cables and connectors, that carries data between
the system's microprocessor and memory. The main purpose of a bus is to provide a
communication path for data and control signals moving between the major components of the
system (Richter, Götzfried and Müller 2016). The bus carries out three major functions: data,
address and control.
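As a rough illustration of these three functions, the sketch below models a single bus cycle in Python: the address selects a device, the control signal says whether to read or write, and the data value travels over the data lines. The address map (RAM below 0x8000, one device register at 0x8000) is invented for this example.

```python
# Illustrative model of a shared bus; the address map is invented.
ram = {}                          # RAM occupies addresses 0x0000-0x7FFF
device_register = {"value": 0}    # one I/O device register at 0x8000

def bus_cycle(address, control, data=None):
    if address < 0x8000:          # address decoding selects the target
        if control == "WRITE":
            ram[address] = data
        else:
            return ram.get(address, 0)
    elif address == 0x8000:
        if control == "WRITE":
            device_register["value"] = data
        else:
            return device_register["value"]

bus_cycle(0x0010, "WRITE", 42)    # processor writes to memory
print(bus_cycle(0x0010, "READ"))  # 42
bus_cycle(0x8000, "WRITE", 7)     # processor writes to the I/O device
print(bus_cycle(0x8000, "READ"))  # 7
```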
Fig 13: Buses connecting Input and Output to both Processor and Memory
(Source: Khoroshilov, Kuliamin and Petrenko 2017)
A bus is a shared communication link that uses one set of wires to connect multiple
subsystems. In some cases the bus shared with memory works alongside a separate I/O bus.
Fig 14: Connection between Processor and Memory
(Source: Kanev et al. 2016)
Advantages
Versatility: new devices can be added easily, and peripherals can be moved between systems that use the same bus.
Low cost: a single set of wires is shared in multiple ways.
Disadvantages
The bus can become a communication bottleneck: its bandwidth can limit the maximum input/output throughput.
The maximum bus speed is largely limited by the overall length of the bus and by the number of devices operating on it.
There are two main kinds of buses: synchronous and asynchronous.
Synchronous bus
It includes a clock in the control lines.
It uses a fixed protocol for communication, defined relative to the clock.
Asynchronous bus
It is not clocked.
It can accommodate a wide range of devices.
It requires a handshaking protocol.
Bus arbitration: Any device that can control the bus is called a bus master.
A device wishing to become bus master asserts a bus request.
A bus master cannot use the bus until its request is granted.
A bus master must signal the arbiter when it has finished using the bus.
A bus arbitration scheme balances two factors: bus priority and fairness.
Fig 15: Delegating I/O Responsibility from the CPU: DMA
(Source: Yildiz, Lekesiz and Yildiz 2016)
Direct Memory Access (DMA)
The DMA controller is external to the CPU.
It acts as a master on the bus.
It transfers blocks of data to or from memory without any intervention by the CPU.
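The effect of DMA can be sketched as follows. This is only an illustration of the idea of a block transfer followed by a completion interrupt; real DMA is carried out by a hardware controller, and the buffer and addresses below are invented.

```python
# Illustrative sketch of a DMA-style block transfer: the controller is
# given a start address and a length, moves the whole block into memory,
# and interrupts the CPU only when the transfer has finished.
memory = [0] * 16
device_buffer = [10, 20, 30, 40]   # data arriving from an I/O device

def dma_transfer(dest_addr, source, length):
    memory[dest_addr:dest_addr + length] = source[:length]
    print("DMA complete: interrupt raised")

dma_transfer(4, device_buffer, 4)  # no per-word copy loop on the CPU
print(memory)                      # [0, 0, 0, 0, 10, 20, 30, 40, 0, 0, ...]
```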
Fig 16: Block Diagram of DMA Controller
(Source: Dihoru et al. 2019)
Conclusion
In the above pages of the report, an idea has been provided of the computer system as a
combination of hardware and software that together provide various kinds of functionality.
Hardware is the physical part of the system, such as the memory devices, keyboard and monitor.
Software is the collection of programs that the hardware needs in order to function properly.
References
Begin, T. and Brandwajn, A., 2016, March. Predicting the system performance by combining
calibrated performance models of its components: a preliminary study. In Proceedings of the 7th
ACM/SPEC on International Conference on Performance Engineering (pp. 95-100). ACM.
Bowden, D., 2016. Development of a large experimental acoustic transmission loss test bench
suitable for large marine diesel exhaust system components. Acoustics 2016.
Comer, D., 2017. Essentials of computer architecture. Chapman and Hall/CRC.
Dihoru, L., Crewe, A.J., Horseman, T., Dietz, M., Oddbjornsson, O., Kloukinas, P., Voyagaki, E.
and Taylor, C.A., 2019. A computer vision approach for dynamic tracking of components in a
nuclear reactor core model. Nuclear Engineering and Design, 344, pp.1-14.
Dumais, S., Cutrell, E., Cadiz, J.J., Jancke, G., Sarin, R. and Robbins, D.C., 2016, January. Stuff
I've seen: a system for personal information retrieval and re-use. In ACM SIGIR Forum (Vol. 49,
No. 2, pp. 28-35). ACM.
Kanev, S., Darago, J.P., Hazelwood, K., Ranganathan, P., Moseley, T., Wei, G.Y. and Brooks,
D., 2016. Profiling a warehouse-scale computer. ACM SIGARCH Computer Architecture
News, 43(3), pp.158-169.
Khoroshilov, A.V., Kuliamin, V.V. and Petrenko, A.K., 2017. Verification of Operating System
Components. Synchronizing and Homing Experiments for Input/output Automata, p.11.
Richter, L., Götzfried, J. and Müller, T., 2016, December. Isolating operating system
components with intel SGX. In Proceedings of the 1st Workshop on System Software for Trusted
Execution (p. 8). ACM.
Shen, K., Selezneva, M.S., Neusypin, K.A. and Proletarsky, A.V., 2017. Novel variable structure
measurement system with intelligent components for flight vehicles. Metrology and
measurement systems, 24(2), pp.347-356.
Silberschatz, A., Gagne, G. and Galvin, P.B., 2018. Operating system concepts. Wiley.
Yan, R., Song, Y. and Wu, H., 2016, July. Learning to respond with deep neural networks for
retrieval-based human-computer conversation system. In Proceedings of the 39th International
ACM SIGIR conference on Research and Development in Information Retrieval (pp. 55-64).
ACM.
Yildiz, B.S., Lekesiz, H. and Yildiz, A.R., 2016. Structural design of vehicle components using
gravitational search and charged system search algorithms. Materials Testing, 58(1), pp.79-81.
L02: Discuss how data and programs are represented in the computer system
Introduction
Computers store data using only two digits, 0 and 1. A binary digit, or bit, is the smallest unit of data used in computing and can take exactly two values, 0 or 1 (Hwang and Jotwani 2016). A binary number is simply a string of binary digits, such as 1001. The circuits in a modern computer system are built from billions of transistors. In simple terms, a transistor can be described as a tiny electronic switch that is either on (representing 1) or off (representing 0).
Discussion
Representation of data and programs in the computer system
Data storage on a computer is complex, but it can be broken down into three basic processes. First, the data is converted into numbers, which makes it easy for the system to store (Al-Jarrah et al. 2015). Second, those numbers are recorded by the hardware inside the system. Third, the numbers are organized so that they can be moved to temporary storage and manipulated by programs. Everything a computer stores is ultimately stored as numbers: a letter is converted to a number, and a photograph is converted to a large set of numbers describing the colour and brightness of each pixel. These numbers are then converted to binary (Sriram and Bhattacharyya 2018). Conventional decimal notation uses ten digits, 0 through 9, to represent every possible value; binary uses just two digits, 0 and 1. The decimal numbers 0 through 8 written in binary are 0, 1, 10, 11, 100, 101, 110, 111 and 1000, so binary numbers are considerably longer than their decimal equivalents. Any value that can be expressed as a series of items that are either true (1) or false (0) can be stored in binary.
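As a minimal sketch of this idea, the Python snippet below converts decimal numbers and a letter to binary; the helper function to_binary is an illustrative name, not a standard routine.

```python
def to_binary(value, width=8):
    """Render a non-negative integer as a fixed-width binary string."""
    return format(value, "0{}b".format(width))

# The decimal numbers 0 through 8 and their binary equivalents.
for n in range(9):
    print(n, "->", to_binary(n, width=4))

# A letter is first mapped to a number (its character code),
# and that number is what the machine actually stores in binary.
letter = "A"
code = ord(letter)                  # 'A' -> 65
print(letter, "->", code, "->", to_binary(code))
```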
Primary Data Storage: The main form of data storage in most systems is the hard drive: a spinning disk with a magnetic coating, over which a read/write head records and retrieves information magnetically. (Early home computers instead used cassette tapes for data storage (Merino et al. 2016).) A binary number is recorded as a series of small areas on the disc, each magnetized in either the north or the south direction.
Other Data Storage: Newer laptops use solid state drives for primary data storage, and similar flash memory chips appear in USB keys and MP3 players. Binary numbers are recorded by charging or not charging a series of tiny capacitors in the chip. Such electronic data storage is far more rugged than magnetic storage (Witten et al. 2016), although over a period of years the capacitors gradually lose their ability to hold an electric charge.
Temporary Data Storage: Disks, drives and USB storage are used for long-term data storage. A system also contains many areas used for short-term storage of electronic data (Macfarlane et al. 2017); small amounts of data are held temporarily in the keyboard, the printer and various other parts of the system.
Conversion of floating point numbers and their storage in the computer
A floating point number, also known as a real number, can represent very small values, very large values, negative values and zero. A floating point number is written in a form of scientific notation
with a fraction F and an exponent E of a certain radix r, giving the form F×r^E. Decimal numbers use a radix of ten (F×10^E), while binary numbers use a radix of two (F×2^E). The representation of a floating point number is not unique (Coronel and Morris 2016); for example, 55.66 can be expressed as 5.566×10^1 and in many other forms. The fraction is therefore normalized so that there is a single non-zero digit before the radix point; for instance, the binary value 1101.01 is normalized as 1.10101×2^3. It should be noted that floating point numbers suffer a loss of precision when stored in a fixed number of bits: there are infinitely many real numbers, but n bits provide only 2^n distinct patterns, so not every real number can be represented (Pan, Morris and Adhikari 2015). In such cases the nearest approximation is used, which results in a loss of accuracy. Floating point operations are also much less efficient than integer operations; to speed them up, a dedicated floating point co-processor is used, and applications that work only with integers avoid floating point arithmetic altogether.
Fig 5: Floating point number representation
(Source: Abadi et al. 2016)
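A minimal Python sketch can make the precision point concrete: it unpacks the IEEE 754 single-precision bit fields of a value (assuming the standard 1+8+23 sign/exponent/fraction layout) and shows that 0.1 is stored only approximately. The function float32_fields is an illustrative helper, not a library routine.

```python
import struct

def float32_fields(x):
    """Split the IEEE 754 single-precision encoding of x into its
    sign, exponent and fraction bit fields (1 + 8 + 23 bits)."""
    bits = format(struct.unpack(">I", struct.pack(">f", x))[0], "032b")
    return bits[0], bits[1:9], bits[9:]

sign, exponent, fraction = float32_fields(0.1)
print("sign    :", sign)
print("exponent:", exponent)   # biased exponent E
print("fraction:", fraction)   # fraction F, without the implicit leading 1

# 0.1 has no finite binary expansion, so only the nearest representable
# value is stored -- a small but real loss of accuracy.
print("%.20f" % 0.1)   # e.g. 0.10000000000000000555 for a 64-bit float
```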
Boolean logical operations, including the adder circuits used for adding binary numbers
In electronic circuits, an adder performs the addition of binary numbers in various systems and processors. Adders are used not only in processors and ALUs but also for calculating increment and decrement operations, memory addresses and table indices. An adder circuit generates a sum and a carry as its outputs (Johnson et al. 2017). The operands can be supplied in various formats, such as binary coded decimal or Gray code. In many instances one's or two's complement representation is used for negative numbers, and only a small alteration is needed to turn an adder into a subtractor; more complex adders handle other signed number representations. Adder circuits are therefore found not only in binary arithmetic but also in digital applications such as table indexing, address calculation and decoding (Krishnan et al. 2016). Adder circuits are mainly of two kinds: the half adder and the full adder.
Half Adder Circuit: A half adder adds two single-bit numbers, A and B. It produces two outputs, the sum, represented by 'S', and the carry, represented by 'C'. The carry represents the overflow into the next digit position in a multi-digit addition (Nash 2018), so the overall value is 2C + S. In Boolean terms, S = A XOR B and C = A AND B. The input variables of the half adder are called the augend and addend bits, and the output variables are the sum and carry. The simplest half adder is shown in the diagram below.
Fig 6: Half adder Circuit
(Source: Pan, Morris and Adhikari 2015)
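Since the half adder is just an XOR gate and an AND gate, its truth table can be sketched in a few lines of Python; half_adder below is an illustrative behavioural model, not a hardware description.

```python
def half_adder(a, b):
    """Half adder: S = A XOR B, C = A AND B."""
    return a ^ b, a & b   # (sum, carry)

print("A B | S C")
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(a, b, "|", s, c)
```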
Full Adder Circuit: A full adder adds three binary inputs, and its implementation is somewhat more involved than the half adder's. It has three inputs and two outputs: the inputs are A, B and the carry-in Cin from the preceding digit position (Abadi et al. 2016), and the two-bit output is denoted by the signals S (sum) and Cout (carry-out), which is passed on to the next digit position. In Boolean terms, S = A XOR B XOR Cin and Cout = (A AND B) OR (Cin AND (A XOR B)), and the overall value is 2×Cout + S.
Fig 7: Full adder Circuit
(Source: Johnson et al. 2017)
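Chaining full adders gives a ripple-carry adder for multi-bit numbers. The sketch below models this behaviourally in Python, under the assumptions that bit lists are least-significant-bit first and that the helper names are illustrative only.

```python
def full_adder(a, b, cin):
    """Full adder: S = A XOR B XOR Cin,
    Cout = (A AND B) OR (Cin AND (A XOR B))."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry_add(x_bits, y_bits):
    """Add two equal-length bit lists (least significant bit first)
    by rippling the carry through a chain of full adders."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 5 (0101) + 3 (0011) = 8 (1000); bit lists are LSB-first.
print(ripple_carry_add([1, 0, 1, 0], [1, 1, 0, 0]))   # [0, 0, 0, 1, 0]
```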
ICE target system debug operation in a system
An in-circuit emulator (ICE) is a hardware interface that lets the programmer observe and debug the software running in a given system. The ICE is installed between the embedded system and an external terminal so that the developer can observe the embedded system and make the necessary alterations to it, since the embedded system itself typically has no display or keyboard of its own (Sriram and Bhattacharyya 2018). An ICE is thus a debugging tool that gives the developer access to the target MCU for in-depth debugging. In a classic setup the microcontroller is removed and the ICE is inserted in its place by means of an adapter. Modern in-circuit emulation offers high performance at a relatively low cost; because the ICE must remain invisible to the system under test (Merino et al. 2016), it is built from extremely fast, memory-intensive chips. The ICE consists of a hardware board accompanied by software on the host: the debugger on the host connects to the MCU through the ICE, which allows the developer to
view the data and signals present inside the MCU and to step through the source code (Witten et al. 2016). The debugging is performed in hardware rather than software, so the MCU's performance is left largely intact and the ICE consumes no MCU resources. This kind of debugging amounts to source-level, run-time debugging, except that with an ICE the complete behaviour of the MCU is reflected in real time.
Conclusion
This part of the report has examined how data and programs are represented in a computer system. It covered the conversion of floating point numbers and their storage in the system, Boolean logic operations including the adder circuits used for adding binary numbers, and finally ICE-based debug operation in a target system.
References
Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving,
G., Isard, M. and Kudlur, M., 2016. Tensorflow: A system for large-scale machine learning.
In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16) (pp. 265-283).
Al-Jarrah, O.Y., Yoo, P.D., Muhaidat, S., Karagiannidis, G.K. and Taha, K., 2015. Efficient
machine learning for big data: A review. Big Data Research, 2(3), pp.87-93.
Coronel, C. and Morris, S., 2016. Database systems: design, implementation, & management.
Cengage Learning.
Hwang, K. and Jotwani, N., 2016. Advanced Computer Architecture, 3e. McGraw-Hill
Education.
Johnson, J., Hariharan, B., van der Maaten, L., Hoffman, J., Fei-Fei, L., Lawrence Zitnick, C.
and Girshick, R., 2017. Inferring and executing programs for visual reasoning. In Proceedings of
the IEEE International Conference on Computer Vision (pp. 2989-2998).
Krishnan, D.R., Quoc, D.L., Bhatotia, P., Fetzer, C. and Rodrigues, R., 2016, April. Incapprox:
A data analytics system for incremental approximate computing. In Proceedings of the 25th
International Conference on World Wide Web (pp. 1133-1144). International World Wide Web
Conferences Steering Committee.
Macfarlane, R., Muir, D.W., Boicourt, R.M., Kahler III, A.C. and Conlin, J.L., 2017. The NJOY
Nuclear Data Processing System, Version 2016 (No. LA-UR-17-20093). Los Alamos National Lab. (LANL), Los Alamos, NM (United States).
Merino, J., Caballero, I., Rivas, B., Serrano, M. and Piattini, M., 2016. A data quality in use
model for big data. Future Generation Computer Systems, 63, pp.123-130.
Nash, J.C., 2018. Compact numerical methods for computers: linear algebra and function
minimisation. Routledge.
Pan, S., Morris, T. and Adhikari, U., 2015. Developing a hybrid intrusion detection system using
data mining for power systems. IEEE Transactions on Smart Grid, 6(6), pp.3104-3113.
Sriram, S. and Bhattacharyya, S.S., 2018. Embedded multiprocessors: Scheduling and
synchronization. CRC press.
Witten, I.H., Frank, E., Hall, M.A. and Pal, C.J., 2016. Data Mining: Practical machine learning
tools and techniques. Morgan Kaufmann.
LO4: Investigate advanced computer architecture and performance
Executive summary
The following study investigates computer performance and architecture. The analysis considers a company that is keen to enhance the performance of its computer systems; its manager has been asked to produce a report on advanced computer performance and architecture. The evaluation demonstrates how the DirectX API works, covering its benefits and drawbacks and the ways in which it can control graphics functions. It then critically analyses ways to improve computer performance with the help of two architectures, namely pipelining and MIMD.
LO4: Investigating advanced computer architectures and their performance
Introduction:
Computer performance denotes the amount of work accomplished by a system; it indicates how well a machine performs the duties it is supposed to carry out, and it depends primarily on the throughput, execution time and response time of the computer system. Computer architecture, on the other hand, denotes the set of rules and methods that describe the implementation, organization and functionality of a system; an architecture defines capabilities and a programming model rather than any specific implementation.
The following study investigates computer performance and architectures. The business in question is eager to improve the performance of its machines; hence this analysis prepares a report with advanced computer performance and architecture in mind.
In this report, the various functions of the DirectX API are demonstrated and its pros and cons are examined, followed by a discussion of how the DirectX API controls graphics functions. A critical evaluation is then made of improvements to computer performance using pipelining architectures and MIMD.
Discussion on DirectX API:
DirectX is a set of standard functions and commands that software developers can use when developing programs. Although many kinds of Windows-based software can use DirectX commands, they are most commonly found in video games. Developers can, for instance, use DirectX to control external inputs, sound effects and playback (Hwang and Jotwani 2016). By including DirectX functions in a game, programmers can use predefined commands to manage the game's video, sound and user input. This makes games simpler to design and gives them a more uniform look and feel, because DirectX games use many of the same commands (Halpern, Zhu and Reddi 2016).
Technically speaking, DirectX is an API, or Application Programming Interface (Lin et al. 2017), consisting of predefined commands and functions. To create programs that use DirectX, software developers
must use the DirectX software development kit, which is available from Microsoft. Most users, however, require only the DirectX End-User Runtime (Kozhirbayev and Sinnott 2017), which must be installed on the computer to run DirectX-enabled software. The API is available for Windows software and Xbox video games.
Beyond this, DirectX serves to bridge the gap between the underlying hardware and the device platform. It gives designers access to the underlying audio hardware and visual capabilities through pre-developed APIs, and it is primarily used to build multimedia applications and video games, which need sophisticated automation and interaction with the system hardware (Qiu et al. 2017). It also provides access to I/O functions and can be used for creating and managing the components of graphical applications. Finally, driver-level development requires the DDK, or DirectX Development Kit (Hager et al. 2016).
Demonstrating the pros and cons of the DirectX API:
Regarding benefits, the API offers better performance than its predecessors. It generates detailed visuals at a resolution of 1080p, pushing far more triangles through the GPU, and its algorithms produce visuals much more efficiently, so the scenes rendered are attractive and rich. It also offers multi-platform support. Another significant benefit is that the CPU load can be reduced dramatically (Han et al. 2018), because applications are allowed to access the hardware at a lower level than in previous situations. Since the API adapts to the specifics of each device, the power consumed by the system can come down by as much as 50%, while still ensuring detailed and complicated images on the screen (Johnson 2018). Video game developers are keen to adopt Microsoft's new API because it can now utilize the CPU to its fullest extent.
Nonetheless, it is not without drawbacks. Graphics cards designed for earlier APIs, such as DirectX 11, may not function effectively, and the system can heat up considerably. Another drawback is that, because it generates high-resolution images, there are some issues with the fidelity of visual effects: the spaces between lines are not always filled suitably (Lan et al. 2016), producing image artefacts, and the push towards 1080p has resulted in images that are not as clean as intended. Ultimately, potential users need to weigh all the pros and cons before making any final decision to spend money on DirectX 12.
How DirectX API can control the graphics functions:
DirectX 12 works with most modern graphics cards. Any Radeon graphics card or APU built on AMD's Graphics Core Next architecture plays nicely with DX12; hence the Fury and Fury X, the Radeon R300 series, the Radeon R200 series, the Radeon 8000 series and the Radeon 7000 series are all supported, covering every AMD graphics solution released from 2012 onwards. Nvidia's DirectX 12 support goes back even further: every graphics card powered by an Nvidia Maxwell, Kepler or Fermi GPU, hence the GeForce GTX 400, 500, 600, 700 and 900 series, can work with DirectX 12. Furthermore, Nvidia's second-generation Maxwell cards, such as the GTX 980 Ti, were the first graphics cards announced to support feature level
12_1 of DirectX 12, which includes features such as volume tiled resources and conservative rasterization (Ripetskiy et al. 2016). This does not mean, however, that other graphics cards fail to support the full DirectX 12, whatever some claims on social media feeds and forums may suggest; the reality is more complicated.
Critical evaluation of computer performance developments with pipelining architectures and MIMD:
Discussion on pipelining architectures:
Pipelining is a technique used to improve the execution throughput of a CPU by making better use of processor resources. The basic concept is to split the processing of an instruction into a series of steps, with each stage performing a specific part of the overall instruction. At the simplest level, the stages can be categorized into the following units (Liu et al. 2017).
Figure 1: “Illustrating instruction pipeline”
(Source: Weithoffer, Kraft and Wehn 2017, pp. 121-126)
Fetch unit:
It fetches the instruction from memory.
Decode unit:
It decodes the instruction that is to be executed.
Execute unit:
It executes the instruction.
Write unit:
It writes the result back to the register or memory.
Each of the steps above has a dedicated CPU module. In a non-pipelined CPU, while an instruction is being processed in one specific stage, the remaining stages sit idle, which is highly inefficient. In a pipelined CPU, by contrast, all the stages work in parallel: while the first instruction is being decoded by the decode unit, the second instruction is already being fetched by the fetch unit (Landy and Stitt 2016). With the four stages above it takes only five clock cycles to execute two instructions on the pipelined CPU; in general, n instructions on an ideal k-stage pipeline take about k + (n - 1) cycles rather than n × k, so the speedup approaches k as n grows. Increasing the number of stages in the pipeline, however, does not always increase execution throughput. An instruction that takes three cycles on a non-pipelined CPU may take four cycles on the pipelined CPU because of the extra stage boundaries involved; thus a single instruction needs more clock cycles on a pipelined CPU, even though the time to complete the execution of many instructions is shorter, and a balance has to be struck between the two. One essential complication with deep pipelines, such as the 31-stage pipeline used in Intel Pentium 4 processors, arises when a conditional branch instruction is executed: the processor cannot determine the location of the upcoming instructions, so it has to wait for the branch instruction to finish, and the whole pipeline may need to be flushed as a result (Tan et al. 2018). Since most programs contain conditional instructions, pipelining can therefore have an adverse impact on overall performance. To alleviate the issue, branch prediction can be used, although this too has an adverse effect whenever a branch is wrongly predicted. Finally, because AMD and Intel implement pipelining in their CPUs in different ways, comparing CPUs purely on the basis of clock speed is never accurate.
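These cycle counts can be checked with a toy model. Assuming an ideal hazard-free k-stage pipeline, n instructions take k + (n - 1) cycles instead of n × k; the Python sketch below compares the two and is a simplification, not a model of any real CPU.

```python
def cycles_non_pipelined(n, k):
    """Each instruction passes through all k stages before the next begins."""
    return n * k

def cycles_pipelined(n, k):
    """Ideal pipeline: k cycles to fill, then one completion per cycle."""
    return k + (n - 1)

K = 4   # fetch, decode, execute, write
for n in (2, 10, 1000):
    plain, piped = cycles_non_pipelined(n, K), cycles_pipelined(n, K)
    print("%4d instructions: %5d vs %4d cycles, speedup %.2f"
          % (n, plain, piped, plain / piped))
# Two instructions take 5 cycles on the 4-stage pipeline, and the
# speedup approaches k = 4 as the instruction count grows.
```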
Understanding computer performance improvements with MIMD:
First of all, there is the shared memory model, in which the processors are connected to globally available memory by means of hardware and software, with the operating system maintaining memory coherence. From the programmer's viewpoint this memory model is easier to understand than the distributed memory model, and a further benefit is that memory coherence is managed by the operating system rather than by the written program (Liu et al. 2017). Two obvious disadvantages are that scalability beyond about thirty-two processors becomes complicated, and that the shared memory model is less flexible than the distributed memory model. Next, there is the hypercube interconnection network. In an MIMD distributed memory machine with a hypercube interconnect of four processors, a processor and its memory module are placed at every vertex of a square. One drawback of the hypercube system is that it must be configured in powers of two, so a machine may have to be built with more processors than the application actually requires (Tan et al. 2018). Lastly, there is the mesh interconnection network; one benefit of the mesh over the hypercube is that a mesh system does not need to be configured in powers of two.
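As a rough software analogy to MIMD (real MIMD machines are hardware; this Python sketch merely mimics the idea with operating-system processes), two workers below run different instruction streams on different data at the same time:

```python
from multiprocessing import Process, Queue

def sum_task(data, results):
    """One 'processor' runs a summing instruction stream."""
    results.put(("sum", sum(data)))

def max_task(data, results):
    """Another 'processor' independently searches for a maximum."""
    results.put(("max", max(data)))

if __name__ == "__main__":
    results = Queue()
    # Different instruction streams applied to different data at the
    # same time: the defining property of MIMD.
    p1 = Process(target=sum_task, args=(range(1_000_000), results))
    p2 = Process(target=max_task, args=([3, 1, 4, 1, 5, 9, 2, 6], results))
    p1.start(); p2.start()
    p1.join(); p2.join()
    print(results.get(), results.get())
```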
Conclusion:
The above discussion has highlighted a throughput-aware model for performance prediction of computer applications: one that predicts performance from refined compute and memory throughput figures. Such models can capture the first-order performance factors of a GPU and deliver helpful hints for future performance optimizations. The work has various limitations that further research can address, such as modelling the expense of double-precision computations and figuring out the upper performance bounds suggested by the model. In the field of computing, MIMD is a technique employed for gaining parallelism: machines using MIMD comprise multiple processors that function independently and asynchronously, so that at any instant different processors may be executing different instructions on distinct pieces of data. Pipelining, in turn, can raise the overall instruction throughput, and it brings the greatest advantage when instruction sequences of the same type are executed. Processors must also handle complicated instructions, and instructions that behave differently from the others are difficult to pipeline; hence many processors settle for flexible designs with three to five pipeline stages, because as pipeline depth rises, the associated hazards rise as well.
References:
Hager, G., Treibig, J., Habich, J. and Wellein, G., 2016. Exploring performance and power
properties of modern multi‐core chips via simple machine models. Concurrency and
Computation: Practice and Experience, 28(2), pp.189-210.
Halpern, M., Zhu, Y. and Reddi, V.J., 2016, March. Mobile cpu's rise to power: Quantifying the
impact of generational mobile cpu design trends on performance, energy, and user satisfaction.
In 2016 IEEE International Symposium on High Performance Computer Architecture (HPCA)
(pp. 64-76). IEEE.
Han, J., Zhang, D., Cheng, G., Liu, N. and Xu, D., 2018. Advanced deep-learning techniques for
salient and category-specific object detection: a survey. IEEE Signal Processing Magazine,
35(1), pp.84-100.
Hwang, K. and Jotwani, N., 2016. Advanced Computer Architecture, 3e. McGraw-Hill
Education.
Johansson, S. and Andersson, R., 2017. Comparison Between Particle Rendering Techniques in
DirectX 11.
Johnson, M., 2018. Implementing a Directionally Adaptive Edge AA Filter using DirectX 11.
GPU Pro 360 Guide to 3D Engine Design, p.115.
Kim, Y. and Jeong, T.S., 2018. Design of Online Action 3D Game based on DirectX.
International Information Institute (Tokyo). Information, 21(5), pp.1573-1582.
Kozhirbayev, Z. and Sinnott, R.O., 2017. A performance comparison of container-based
technologies for the cloud. Future Generation Computer Systems, 68, pp.175-182.
Lan, X., Voznyy, O., Kiani, A., García de Arquer, F.P., Abbas, A.S., Kim, G.H., Liu, M., Yang,
Z., Walters, G., Xu, J. and Yuan, M., 2016. Passivation using molecular halides increases
quantum dot solar cell performance. Advanced Materials, 28(2), pp.299-304.
Landy, A. and Stitt, G., 2016, February. Doubling FPGA Throughput via a Soft SerDes
Architecture for Full-Bandwidth Serial Pipelining. In Proceedings of the 2016 ACM/SIGDA
International Symposium on Field-Programmable Gate Arrays (pp. 282-282). ACM.
Lin, J., Yu, W., Zhang, N., Yang, X., Zhang, H. and Zhao, W., 2017. A survey on internet of
things: Architecture, enabling technologies, security and privacy, and applications. IEEE
Internet of Things Journal, 4(5), pp.1125-1142.
Liu, G., Tan, M., Dai, S., Zhao, R. and Zhang, Z., 2017. Architecture and Synthesis for Area-
Efficient Pipelining of Irregular Loop Nests. IEEE Transactions on Computer-Aided Design of
Integrated Circuits and Systems, 36(11), pp.1817-1830.
Qiu, T., Chen, N., Li, K., Qiao, D. and Fu, Z., 2017. Heterogeneous ad hoc networks:
Architectures, advances and challenges. Ad Hoc Networks, 55, pp.143-152.
Ripetskiy, A.V., Zelenov, S.V., Vučinić, D., Rabinskiy, L.N. and Kuznetsova, E.L., 2016.
Automatic errors correction method based of the layer-by-layer product representation which
parallel algorithms are developed for multiprocessor computer hardware. International Journal
of Pure and Applied Mathematics, 111(2), pp.343-355.
Tan, X., Ye, X.C., Shen, X.W., Xu, Y.C., Wang, D., Zhang, L., Li, W.M., Fan, D.R. and Tang,
Z.M., 2018. A pipelining loop optimization method for dataflow architecture. Journal of
Computer Science and Technology, 33(1), pp.116-130.
Weithoffer, S., Kraft, K. and Wehn, N., 2017, September. Bit-level pipelining for highly parallel
turbo-code decoders: A critical assessment. In 2017 IEEE AFRICON (pp. 121-126). IEEE.