Journal Paper and Implementation of Concurrency and Banking System

JOURNAL PAPER AND IMPLEMENTATION OF
CONCURRENCY AND BANKING
Student’s Name
University
Course
Date
ABSTRACT
The performance of a database system is strongly affected by key factors such as its concurrency control mechanism, especially in environments with multiuser distributed transactions. Such conditions are demanding for time-sensitive, large-capacity database applications, so the database system must be properly managed and monitored to remain efficient and dependable. A temporal database is used when the time aspect of data must be captured: through temporal validity support, a time dimension can be attached to the data transmitted or received. Concurrency control becomes even more demanding when executed on a temporal database within a multiuser distributed-transaction environment, and hence calls for distinctive treatment. To keep distributed transactions reliable, the sessions of different end users must produce outcomes that are both meaningful and consistent; the sessions must also be able to run concurrently, or else the whole database is considered inconsistent. The concurrency issue was originally addressed using approaches such as timestamp-based and locking-based methods. This research proposes an efficient concurrency algorithm that combines the timestamp and locking approaches, which makes it well suited to a temporal database in which a number of users attempt to access data from a common source. The scope of the research covers the design, execution, experimental study, and performance analysis of the algorithm in relation to the existing optimistic and pessimistic concurrency control mechanisms. The proposed algorithm is implemented using an Oracle 12c trigger, which helps ensure the reliability of the temporal database through an efficient locking mechanism and temporal validity support. In the experiment, Oracle 12c Enterprise Manager is used to represent both the concurrency and the locking cases graphically.
1. INTRODUCTION
Concurrency controls (CC) are majorly used for purposes of ensuring the reliability of
database systems. When distributed transactions are done concurrently, concurrency must yield
similar outcomes as an execution done sequentially. An execution is said to be serializable when
its computation reflects a serial execution. When carrying out a serial execution of more than one
transaction, all operations of individual transactions will be carried out before moving to the next
transaction. This means there will be no conflicting situations. Serializable execution brings
about the consistency of the database systems. The concurrency control which is primarily used
in historical data management moves to another level when used on the temporal database.
Basically, applications that are time reliant or real-time sensitive are in nature temporal.
They are either categorized as time-referenced or time-relevant data. For instance stock exchange
and portfolio management for financial applications, airline and hotel reservations within
scheduling applications. The aforementioned applications fall within the two categories. The
currently existing concurrency control mechanism for database system includes optimistic and
pessimistic approaches. For an optimistic approach, in spite of conflicting situation concurrent
transaction is allowed to go on with a risk of starting again. While for pessimist approach the
transaction is terminated in the event of conflict. In order to ensure consistency of the database
system, locking offers an efficient concurrency control. It simply provides concurrency control
mechanism locks on data accessibility. Access to the data item is granted instantaneously a lock
is attained in a transaction.
When a probable conflict is detected in a transaction, pessimistic concurrency control takes avoidance measures and brings the transaction to a halt. Conversely, optimistic concurrency control still allows the transactions to proceed even when a conflict is foreseeable; if the conflict does occur, the affected transactions are restarted. The focus is on ensuring that resources are not blocked for long intervals. The pessimistic approach, by contrast, suffers from shortcomings such as deadlocks and numerous lockouts.
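The contrast between the two approaches can be sketched in a few lines of Python. This is a simplified illustration only, not Oracle's actual mechanism: the pessimistic writer takes a lock before touching the row, while the optimistic writer reads a version number and restarts if it has changed by write time.

```python
import threading

class Row:
    """A single data item with a version counter for optimistic checks."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.lock = threading.Lock()

def pessimistic_update(row, new_value):
    # Take the lock up front: conflicting writers must wait (risking deadlock
    # when several rows are involved).
    with row.lock:
        row.value = new_value
        row.version += 1

def optimistic_update(row, compute, max_retries=5):
    # Read without holding the lock, then validate the version at write time.
    for _ in range(max_retries):
        seen_version = row.version
        new_value = compute(row.value)
        with row.lock:  # only the final check-and-write is atomic
            if row.version == seen_version:
                row.value = new_value
                row.version += 1
                return True
        # The version moved under us: conflict, so start again.
    return False  # give up after repeated conflicts
```

The optimistic path never blocks readers; it pays instead with the possibility of restarts under heavy write contention.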
Optimistic locking provides an alternative solution to these problems. It does not lock records when they are read, and it proceeds on the assumption that the data being updated has not changed since the read. Because no locks are taken during the read, deadlocks are eliminated: users never have to wait on each other's locks. The Oracle database uses optimistic locking by default.
However, when an experimental study checked the optimistic locking approach for efficiency and performance in a temporal database environment, it did not meet the requirements of temporal database systems and needed improvement.
Historical data can be represented in a systematic manner using a temporal database, which provides mechanisms to store and manipulate time-varying information. Temporal databases encompass all database applications that require some aspect of time when organizing their information, so consistency in a temporal database is a critical area that the database administrator must address. Oracle introduced Oracle Database 12c on June 25, 2013; it is considered an important architectural transformation in the legacy of the world's leading database, given its 25 years of market presence and dominance.
Oracle 12c supports temporal database consistency through temporal validity support and an efficient locking mechanism. Oracle Enterprise Manager in Oracle 12c provides a graphical view of distributed transactions and the various user sessions, with locking and unlocking details.
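Temporal validity can be pictured as a pair of valid-time columns attached to every row, with an "as of" query keeping only the rows whose interval covers the requested instant. The Python sketch below is illustrative only; the table layout and names are assumptions, not Oracle's temporal validity syntax.

```python
from datetime import date

# Each row carries a valid-time interval [valid_from, valid_to).
# Two versions of the same account, valid over disjoint periods:
accounts = [
    {"id": 1, "balance": 500,
     "valid_from": date(2023, 1, 1), "valid_to": date(2023, 6, 1)},
    {"id": 1, "balance": 650,
     "valid_from": date(2023, 6, 1), "valid_to": date.max},
]

def as_of(rows, instant):
    """Return the rows whose valid-time interval covers the given instant."""
    return [r for r in rows if r["valid_from"] <= instant < r["valid_to"]]
```

A query "as of" March 2023 sees the old balance, while a query in July sees the new one; no row is ever overwritten, only closed off in time.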
2. BACKGROUND
In the past two decades, researchers have concentrated on temporal (time-referenced) data with the intention of developing concepts, techniques, and tools better suited to managing it. The most recent observation-based research has found that the majority of time-centric databases behind real-time applications hold temporal data. Conventional database technology in the banking system lacks adequate support for such databases, particularly when it comes to concurrency. Temporal database systems differ from conventional databases in how data is stored; the difference lies in how the validity of data is represented in the database.
Concurrency control is a crucial aspect of a banking system database. Many researchers have tried to develop protocols aimed at achieving serializability. The approaches used by these protocols include timestamps, locking, and multiple versions. The majority of concurrency control schemes applied in the banking sector rely on serializability as a common concept. Conflicting processes and functions are usually resolved in these systems by aborting or delaying the processes and transactions. Locking protocols, timestamp-validation techniques, timestamps themselves, and multiversion schemes are used as the main concurrency control schemes.
The proposals brought forward for concurrency control in banking system databases have produced various classes of concurrency control approaches. A brief survey has been carried out with the aim of designing a concurrency control algorithm that is more flexible than the conventional ones.
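One way the timestamp and locking families can be combined is to keep locks for writes but use transaction timestamps to decide which side of a conflict waits or restarts. The sketch below shows a wound-wait style rule in Python; it is an illustration of the general idea, not the specific algorithm this paper proposes.

```python
import itertools

_ts_counter = itertools.count(1)

class Txn:
    """A transaction stamped at start; a smaller timestamp means older."""
    def __init__(self):
        self.ts = next(_ts_counter)

def resolve_conflict(requester, holder):
    """Wound-wait rule: an older requester 'wounds' (restarts) the younger
    lock holder, while a younger requester simply waits for the lock.
    Because waiting only ever happens in one age direction, no cycle of
    waiting transactions (a deadlock) can form."""
    if requester.ts < holder.ts:
        return "restart_holder"
    return "requester_waits"
```

The timestamp thus supplies a global ordering that breaks the symmetric waiting a pure locking protocol would allow.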
3. PROBLEM DESCRIPTION
The development of cloud computing has contributed to issues related to data-intensive services. Cloud computing is described as an architecture able to accommodate data-intensive, large-scale software (Pokorny 69). NoSQL databases can be used within a cloud computing architecture to provide a better solution. The need for machines to scale out, and for a greater diversity of data-retrieval patterns, initiated the development of NoSQL databases. According to the available literature, many enterprises use NoSQL databases for data storage; relational databases, developed for structured data and scale-up systems, have not proven effective here. Consistency and parallelism of operations are addressed through the implementation of NoSQL databases. Research indicates that the variety of data types is compelling enterprises to invest in and shift to big data technologies such as NoSQL (Leavitt 13). NoSQL is believed to provide enhanced scalability, flexibility, and functionality, and it increases performance by allowing many devices to be included in a cluster. Because the devices are linked in a distributed form, both performance and scalability improve; the capability to distribute data across various devices is a key aspect of NoSQL databases.
4. SOLUTION
A document store lacks a predefined schema and is hence a more complex type of NoSQL database. Documents are stored in, and accessed from, the document store in formats such as BSON, JSON, and XML. Each document in the database is identified by a specific key, and data retrieval is accomplished through a query language or an API. Advantages of the document store include intuitive data structures, a flexible schema, and applicability to real-time analytics (Katkar 17). The disadvantages are increased hardware demand, dynamic aggregate design, and redundant storage.
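The key-to-document access pattern just described can be sketched in a few lines of Python. This is a toy model only; real document stores add indexing, a query language, and persistence.

```python
import json

class DocumentStore:
    """Schema-less store: each document is addressed by a key, and
    documents in the same store need not share any structure."""
    def __init__(self):
        self._docs = {}

    def put(self, key, document):
        # Round-trip through JSON to store an independent copy,
        # mirroring how documents are serialized on disk.
        self._docs[key] = json.loads(json.dumps(document))

    def get(self, key):
        return self._docs.get(key)

    def find(self, predicate):
        """Query by an arbitrary predicate, standing in for a query API."""
        return [d for d in self._docs.values() if predicate(d)]
```

Note that the two documents below carry different fields, which a relational schema would not permit without NULL-heavy columns.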
A column family is a type of NoSQL database that stores data in collections of columns. Logically clustered columns make up column families, which may consist of several columns created at runtime. A single column family can be viewed as a map of data, and a two-level aggregate arrangement is applied in this kind of database (Kumar et al. 30). Examples of column-family systems include Facebook's Cassandra, HBase, and Yahoo's PNUTS. Advantages of the column family include distribution, high performance, and enhanced efficiency; the disadvantages are limited query options, incompatibility with early prototypes, and high maintenance effort.
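The two-level aggregate can be pictured as a map of maps: a row key leads to a family of columns, and the columns themselves appear at runtime on first write. A minimal Python sketch (all names are illustrative):

```python
from collections import defaultdict

class ColumnFamilyStore:
    """Two-level aggregate: row key -> column name -> value.
    Columns need not be declared up front; they appear on first write."""
    def __init__(self):
        self._rows = defaultdict(dict)

    def put(self, row_key, column, value):
        self._rows[row_key][column] = value

    def get(self, row_key, column):
        # .get() avoids materializing empty rows for unknown keys.
        return self._rows.get(row_key, {}).get(column)

    def row(self, row_key):
        """Return the whole column map for one row key."""
        return dict(self._rows.get(row_key, {}))
```

Because rows are independent maps, two rows in the same store can hold entirely different column sets, which is what makes runtime column creation cheap.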
Graph databases store entities and the associations between them. In a graph database, data is stored once but interpreted in diverse ways based on the available relationships; this characteristic follows from how the database is organized. Intelligence can be added to a relationship through its specific properties. The graph database best suits inter-linked data such as maps and social networking data (Kumar et al. 30). Its merits include high performance and efficiency, applicability to social networking, and close modeling of networked records; its demerits are uneven updating, an inability to handle some large volumes of data, and difficulties in data sharing.
5. DISCUSSION AND EVALUATION
Transactional memory is a lock-free approach to synchronization: it coordinates access to shared data without mutual exclusion locks, so the disruption of one process does not affect the others (Mahr et al. 39). Transactional memory is a new multiprocessor architectural design whose target is to enable lock-free synchronization. It allows programmers to define customized read-modify-write operations that apply to many independent words of memory, and it avoids the deadlocks and convoys for which the best-known lock-based techniques are notorious.
Transactional memory executes many instructions issued by a single process while satisfying the properties of serializability and atomicity (Mankin et al. 90). Serializability means that the steps of one transaction do not interleave with the steps of another; atomicity requires that if a process is initiated, it must run to termination.
Transactional memory provides the following ways of accessing memory (Leis et al. 580). Load-Transaction reads the value of a shared memory location into a private register. Load-Transaction-Exclusive also reads the value of a shared memory location into a private register, but signals that the location is likely to be changed. Store-Transaction writes a value from a private register to a shared memory location.
Transactional memory manipulates the transaction state through the following instructions. Commit attempts to make the process's tentative changes permanent; after a disruption occurs, unterminated processes are rolled back, and Commit returns an indication of either success or failure (Martin et al. 17). Abort discards all updates that have already been written. Validate tests the current transaction's state, returning true if the current transaction has not been aborted and false if it has.
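These primitives can be sketched as a small word-based software transactional memory in Python: loads record the version of each word seen, stores are buffered privately, and commit validates the read set before publishing anything. This is a simplified sketch of the programming model described above, not a hardware design.

```python
class TMWord:
    """A shared memory word with a version stamp."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class Transaction:
    def __init__(self):
        self.read_set = {}   # word -> version observed at load time
        self.write_set = {}  # word -> tentative new value (private)
        self.aborted = False

    def load(self, word):
        # Load-Transaction: read the value, remembering the version seen.
        if word in self.write_set:          # read our own tentative write
            return self.write_set[word]
        self.read_set[word] = word.version
        return word.value

    def store(self, word, value):
        # Store-Transaction: buffer the write; nothing is visible yet.
        self.write_set[word] = value

    def validate(self):
        # True while every word we read is still at the version we saw.
        return not self.aborted and all(
            w.version == v for w, v in self.read_set.items())

    def commit(self):
        # Publish tentative writes only if the read set still validates;
        # otherwise discard them, as an aborted transaction must.
        if not self.validate():
            self.abort()
            return False
        for word, value in self.write_set.items():
            word.value = value
            word.version += 1
        return True

    def abort(self):
        # Discard all updates that were tentatively written.
        self.write_set.clear()
        self.aborted = True
```

A transaction whose read set was overwritten by another committer fails validation and aborts rather than publishing inconsistent results, which is exactly the serializability guarantee described above.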
NoSQL databases are implemented in distributed systems to facilitate the sharing of data and information, and their transactions are incorporated using parallel programming. In distributed systems, concurrent activities utilize the same information or data at the same time (Schoeberl and Hilber 279), so efficiency and safe sharing are important for the processes to function effectively. NoSQL database systems require data consistency and the completion of all transactions. Transactional memory enables the database to manage all the processes taking place and to eliminate issues such as deadlocks. Transactions in a NoSQL database need to meet three characteristics. Consistency is the key feature that must be met to ensure that transactions are accomplished accurately (Schoeberl and Hilber 279); moreover, atomicity and isolation must be achieved for proper utilization of the NoSQL database. Transactional memory plays a major role in achieving these three features in all transactions.
Parallel hardware has recently been exploited by distributed systems such as NoSQL database systems, with enhanced performance as the key motivation. Transactional memory enables NoSQL database systems to execute several queries or transactions at the same time (Vizzotto et al. 180). When NoSQL databases are in use, the transactional memory acts like a shared memory, providing the running transactions with the resources they need to execute. Many queries can execute simultaneously because the transactional memory coordinates the way transactions are admitted into the system; the strategy for implementing a transaction is based on the availability of the resources it requires. Moreover, parallelism is achieved in the database systems with the help of transactional memory: several processes are allowed to use the same resources without conflict (Larus et al. 80), and the transactional memory manages the sharing as if only one transaction were utilizing the shared resource. Although a NoSQL database may be accessed by many transactions, transactional memory facilitates sharing so that all computations execute effectively. When distributed systems employ parallelism in this way, the results are the same as if the processes had been executed one at a time (serialization). In summary, transactional memory enables concurrent processes to use the same database and acquire the same outcomes.
In distributed systems, the key storage that aids execution is the transactional memory; it enables multitasking and error recovery. Conflicts occur when transactions depend on a shared memory, and the transactional memory controls how transactions are admitted into the NoSQL database to eliminate them. The transactional memory system also detects the conflicts that do take place and resolves them (Martin et al. 17). When data is updated or modified in the database, the transactional memory makes it available to the other transactions; such instant updates promote data consistency, although an update may be withheld until a given transaction completes, again for the sake of consistency. Processes that depend on one another must share their results for the computations in the database to execute effectively and accurately.
The transactional memory includes a transaction descriptor that records information about all transactions taking place in the NoSQL database system. The memory evaluates the state of each process to ensure that it does not execute before the relevant updates are made; transactions that would cause conflicts are eliminated or stopped to prevent the anticipated conflict from occurring (Mahr et al. 39). The admission of a new transaction is governed by the state of the current transactions: for instance, if adding a new transaction would cause a conflict, the transactional memory aborts the addition. In summary, transactional memory helps eliminate conflicts by controlling the way transactions are added to the database system.
The transactional system gives the concept of a transaction within a NoSQL database: a transaction is a sequence of operations that satisfies the ACID properties. Atomicity means that once a transaction is established, either all of its operations are performed or none is (Vizzotto et al. 184). Consistency means the system is taken from one stable state to another stable state. Isolation requires that a transaction executing concurrently with others follows semantics that preserve consistency, as though it were running alone. Finally, durability requires that once a transaction has finished, its effects remain durable even if a fault is later encountered. After being added to a NoSQL database, a transaction should either complete successfully (commit) or abort.
6. CONCLUSIONS
The transactional system allows concurrent transactions, which access and modify data concurrently (Sonmez et al. 146). Fault tolerance in a NoSQL database is achieved by replicating regions across different servers. The earlier approach of NoSQL systems uses a consistent update mechanism in which different replicas are allowed to accept updates; data replication and data replacement are the common strategies used to add an operation in NoSQL databases. A middleware layer placed between the client and the server introduces a transactional guarantee (Saha 185). The NoSQL database extends the client interface with commands that start and end a transaction: once the client establishes a transaction, a sequence of operations is carried out that aligns with the NoSQL API but runs in a transactional context. NoSQL uses key-value stores, which enable the storage of values that can be retrieved by key; such a system can hold both structured and unstructured data. Hence, incorporating transactional memory into NoSQL databases ensures that transactions are processed faster, within minimal time.
References
Katkar, M. "Performance Analysis for NoSQL and SQL." International Journal of Innovative and Emerging Research in Engineering 2.3 (2015): 12-17.
Kumar, Rakesh, et al. "Apache Hadoop, NoSQL and NewSQL Solutions of Big Data." International Journal of Advance Foundation and Research in Science & Engineering (IJAFRSE) 1.6: 28-36.
Larus, James, and Christos Kozyrakis. "Transactional Memory." Communications of the ACM 51.7 (2008): 80-88.
Leavitt, N. "Will NoSQL Databases Live Up to Their Promise?" Computer 43 (2010): 12-14.
Mahr, Philipp, Alexander Heine, and Christophe Bobda. "On-Chip Transactional Memory System for FPGAs Using TCC Model." Proceedings of the 6th FPGAworld Conference (2009): 39.
Mankin, Jennifer, David Kaeli, and John Ardini. "Software Transactional Memory for Multicore Embedded Systems." Languages, Compilers, Tools & Theory for Embedded Systems (2009): 90.
Martin, Milo, Colin Blundell, and E. Lewis. "Subtleties of Transactional Memory Atomicity Semantics." IEEE Computer Architecture Letters 5.2 (2006): 17.
Pokorny, Jaroslav. "NoSQL Databases: A Step to Database Scalability in Web Environment." International Journal of Web Information Systems 9.1 (2013): 69.
Saha, B., A. Adl-Tabatabai, and Q. Jacobson. "Architectural Support for Software Transactional Memory." 39th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO'06) (2006): 185.
Schoeberl, M., and P. Hilber. "Design and Implementation of Real-Time Transactional Memory." 2010 International Conference on Field Programmable Logic & Applications (FPL) (2010): 279.
Sonmez, N., et al. From Plasma to Beefarm: Design Experience of an FPGA-Based Multicore Prototype. Springer Verlag.
Sonmez, N., et al. "TMbox: A Flexible and Reconfigurable 16-Core Hybrid Transactional Memory System." 2011 IEEE 19th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM) (2011): 146.
Vizzotto, Juliana Kaizer, and André Rauber Du Bois. "Modelling Parallel Quantum Computing Using Transactional Memory." Electronic Notes in Theoretical Computer Science 270 (2011): 183-190.