Print ISSN: 1991-8941

Online ISSN: 2706-6703


Volume 2, Issue 1, Winter and Spring 2008, Page 1-194


Sufyan T. Faraj

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 1-11
DOI: 10.37652/juaps.2008.15412

SSL/TLS is the protocol used for the vast majority of secure transactions over the Internet. However, this protocol needs to be extended in order to create a promising platform for the integration of quantum cryptography (QC) into the Internet infrastructure. This paper presents a novel extension of SSL/TLS that significantly facilitates such integration. This extended version of SSL/TLS is called QSSL (Quantum SSL). During the development of QSSL, we concentrated on creating a simple, efficient, general, and flexible architecture that enables the deployment of practical quantum-cryptographic security applications. Indeed, QSSL efficiently supports unconditionally secure encryption (one-time pad) and/or unconditionally secure authentication (based on universal hashing). A simplified version of QSSL based on the BB84 (Bennett-Brassard 84) quantum key distribution (QKD) protocol has been implemented and experimentally tested. This has enabled us to experimentally assess our protocol design based on software simulation of the quantum channel events used for QKD.
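
The sifting stage of BB84, which the implementation simulates in software, can be sketched as follows (a minimal illustration; the function name, qubit count, and seed are assumptions, and no eavesdropper or error reconciliation is modeled):

```python
import random

def bb84_sift(n_qubits, seed=0):
    """Simulate BB84 key sifting: Alice sends random bits in random
    bases, Bob measures in random bases, and they keep only the
    positions where the bases match (no eavesdropper modeled)."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_qubits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_qubits)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [rng.randint(0, 1) for _ in range(n_qubits)]
    # When bases match, Bob's measurement reproduces Alice's bit;
    # mismatched-basis results are random and are discarded in sifting.
    sifted = [alice_bits[i] for i in range(n_qubits)
              if alice_bases[i] == bob_bases[i]]
    return sifted

key = bb84_sift(1000)
print(len(key))  # roughly half of the transmitted qubits survive sifting
```

On average half of the positions survive, which is why QKD protocols transmit roughly twice as many qubits as the raw key length they need.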


Sinan G. Abid Ali; Nidhal E. Berbat; Sufyan T. Faraj

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 12-23
DOI: 10.37652/juaps.2008.15423

The objective of this work is to build a number of different secure applets for the Java smart card, where each applet is designed for a specific task. Three packages were designed. The first package is the “Secure Wallet”, which represents the electronic money storage card for financial services such as banks. The second package is “Card Connection”; this package was designed to be used in prepaid communication applications such as telephone, Internet, etc. The third package is “Health Care”, which represents the medical file of the card carrier; it is used in hospitals, clinics and other medical establishments. These applets were simulated with the development kit for the Java Card platform. Each applet was compiled, converted, verified, and installed successfully using the development kit tools. During the installation step, a script file was produced which contains Application Protocol Data Unit (APDU) commands. Each APDU command was processed, and the result of processing was saved to a log file that records both the command and the response APDU. Both the inputs and the results were in hexadecimal.
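
The APDU commands recorded in the script file follow the standard header-plus-data layout, which can be illustrated with a small parser (a hedged sketch; the field names follow ISO/IEC 7816-4, the response-length byte Le is omitted for brevity, and the example AID bytes are purely illustrative):

```python
def parse_apdu(hex_str):
    """Split a command APDU (hex string) into its header fields and data.
    Layout: CLA INS P1 P2 [Lc Data]; the optional Le byte is not handled."""
    b = bytes.fromhex(hex_str)
    apdu = {"CLA": b[0], "INS": b[1], "P1": b[2], "P2": b[3]}
    if len(b) > 5:
        lc = b[4]                        # length of the command data field
        apdu["Lc"] = lc
        apdu["Data"] = b[5:5 + lc].hex().upper()
    return apdu

# Hypothetical SELECT-applet command (the AID bytes are illustrative only)
cmd = parse_apdu("00A4040005A000000001")
print(cmd)
```

INS = 0xA4 is the standard SELECT instruction, which is typically the first command in such install/run scripts.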


Samyia S. Lazar; Azmi T. Huseen; Loaye A. Goerge

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 24-42
DOI: 10.37652/juaps.2008.15464

Image compression means reducing the size of image data in a way that allows the necessary components to be reconstructed. The technique converts the image data into an efficient code, and this code can then be decoded to reconstruct an approximation of the image. In this paper we implement image compression using the wavelet transform. After loading the image data, the image is divided into three color components (red, green and blue), and each component is processed independently. Each component is transformed by the wavelet transform into subbands (a low subband and high subbands), where the number of subbands depends on the number of transform passes. After the transformation is complete, the coding is divided into two parts. First, the coefficients of the low subband are processed using Differential Pulse Code Modulation (DPCM) to reduce their size, and then coded with an S-Shift coder. Second, the other subbands are processed by dividing each subband into eight bit slices, which are then coded with a chain encoder. After coding, the compressed data is stored in a file. The compressed data is then decompressed using algorithms similar to those used in the compression system but in inverse order. The compression ratio in this work reaches up to 57 with an acceptable level of error and quality.
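
The two coding steps described above, wavelet decomposition into subbands followed by DPCM of the low subband, can be sketched for one image row (a simplified one-dimensional illustration using the Haar wavelet; the paper's actual filters and the S-Shift and chain coders are not reproduced):

```python
def haar_1d(row):
    """One pass of the Haar wavelet: split a row into a low (average)
    and a high (difference) subband."""
    low  = [(row[2*i] + row[2*i+1]) / 2 for i in range(len(row) // 2)]
    high = [(row[2*i] - row[2*i+1]) / 2 for i in range(len(row) // 2)]
    return low, high

def dpcm(coeffs):
    """Differential coding of the low subband: keep the first value and
    then only successive differences, which are typically small."""
    return [coeffs[0]] + [coeffs[i] - coeffs[i-1] for i in range(1, len(coeffs))]

row = [100, 102, 104, 108, 110, 110, 90, 70]
low, high = haar_1d(row)
print(low)        # [101.0, 106.0, 110.0, 80.0]
print(dpcm(low))  # [101.0, 5.0, 4.0, -30.0]
```

The small differences produced by DPCM are what the subsequent entropy coder can represent in few bits.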


Warqaa Y.Ibraheem; Sahar Kh. Ahmed; Nada N. Saleem

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 43-55
DOI: 10.37652/juaps.2008.15430

In the present research, algorithms employing fuzzy logic on median and mean filters have been developed to improve impulse noise removal performance in image processing. These algorithms achieve significantly better image quality and preserve the intricate details of the image better than classical median and arithmetic mean filters when the images are corrupted by impulse noise. The proposed fuzzy image filters (Filter 1, Filter 2 and Filter 3) are based on a combination of fuzzy impulse detection and restoration of corrupted pixels; a fuzzy knowledge base is required for the detection of impulses. The research also presents an adaptive fuzzy filter system (Filter 4) for noisy image enhancement that combines smoothing and sharpening; the method automatically obtains an optimum parameter value adaptively by evaluating local features. We present results for different levels of impulse noise corruption on several real images, and the performance of the proposed filters is compared with statistical noise removal methods to show the effectiveness of the proposed techniques.
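
The detect-then-restore idea behind the proposed filters can be sketched as follows (an illustrative simplification: the membership thresholds and the blending rule are assumptions, not the paper's actual fuzzy knowledge base):

```python
def impulse_membership(pixel, neighbors, lo=20, hi=60):
    """Fuzzy-style membership in the class 'impulse': 0 if the pixel is
    close to the local median, 1 if far, linear in between (the lo/hi
    thresholds are illustrative)."""
    med = sorted(neighbors)[len(neighbors) // 2]
    d = abs(pixel - med)
    if d <= lo:
        return 0.0
    if d >= hi:
        return 1.0
    return (d - lo) / (hi - lo)

def restore(pixel, neighbors):
    """Replace the pixel by a blend of itself and the local median,
    weighted by its impulse membership (detect-then-restore)."""
    med = sorted(neighbors)[len(neighbors) // 2]
    m = impulse_membership(pixel, neighbors)
    return (1 - m) * pixel + m * med

nb = [118, 119, 121, 122, 120, 117, 123, 119]
print(restore(120, nb))  # uncorrupted pixel is kept: 120.0
print(restore(255, nb))  # obvious impulse is replaced by the median: 120.0
```

Because uncorrupted pixels get membership near zero, fine detail passes through unchanged, which is the advantage over blindly applying a median filter everywhere.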


Ali M. Sagheer; Abdulrahman D. Khalaf

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 56-65
DOI: 10.37652/juaps.2008.15479

This paper introduces a system that can be applied to monitoring the speed of moving objects using a single camera. The monitoring system is developed to monitor one moving object at a time, with its speed being estimated from a sequence of video frames. Field tests have been conducted to capture real-life data, and the processed results are presented. Multiple moving objects and noisy data problems are also considered. The proposed system depends on evaluating the position and orientation of moving objects in the real world according to a suitable reference point on the screen (a static object), which can be selected by the user.
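
The underlying speed computation from object positions in successive frames can be sketched as follows (a minimal illustration; the pixel-to-metre calibration via the user-selected static reference is assumed to reduce to a fixed scale factor, which holds only for motion in a plane at a known distance from the camera):

```python
def speed_kmh(p1, p2, dt, metres_per_pixel):
    """Estimate object speed from its pixel positions in two frames
    separated by dt seconds, using a fixed pixel-to-metre scale."""
    dx = (p2[0] - p1[0]) * metres_per_pixel
    dy = (p2[1] - p1[1]) * metres_per_pixel
    dist = (dx * dx + dy * dy) ** 0.5   # displacement in metres
    return dist / dt * 3.6              # m/s -> km/h

# Object moves 100 pixels between frames 0.5 s apart at 0.05 m/pixel
print(speed_kmh((40, 30), (140, 30), 0.5, 0.05))  # about 36 km/h
```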

Scintillation on Non-Standard Atmosphere

Ahmed N. Rasheed

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 66-72
DOI: 10.37652/juaps.2008.15506

There are several methods to measure the magnitude of scintillation, but most of their equations do not include meteorological elements. Moreover, the magnitude of scintillation cannot be measured at elevation angles of 5°-10°. A prediction method is suggested to estimate tropospheric scintillation on an earth-space path; this method avoids the problems of the existing methods. We apply this method to the Basrah atmosphere in the case of a non-standard atmosphere and study the effect of meteorological conditions, frequency, antenna diameter, elevation angle and altitude above sea level on the magnitude of scintillation.


Samara A. Elia; Nasser N. Khamiss Alani

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 73-82
DOI: 10.37652/juaps.2008.15425

This paper studies the compression performance of video using the OTS search method for motion estimation and SNR scalable coding to enhance the quality of the video samples. The work evaluates a set of suitable objective fidelity measures, such as MSE, PSNR and CR. The video coding system model is designed to treat the video signal as CBR and is implemented for different video sample rates. The two major components, inter- and intra-frame compression, are achieved as an optimal compromise between quality and CBR. At the same time, the basic structure of communication networks, represented by the Transmission Control Protocol/Internet Protocol (TCP/IP) model, is taken into consideration within the system's BER control. The developed system is implemented using Visual Basic (version 6.0) under the Windows XP operating system.
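
The fidelity measures used in the evaluation can be computed as follows (a straightforward sketch of the standard MSE, PSNR and compression-ratio definitions; the sample values are illustrative):

```python
import math

def mse(orig, recon):
    """Mean squared error between original and reconstructed samples."""
    return sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)

def psnr(orig, recon, peak=255):
    """Peak signal-to-noise ratio in dB for 8-bit samples."""
    e = mse(orig, recon)
    return float("inf") if e == 0 else 10 * math.log10(peak * peak / e)

def compression_ratio(raw_bytes, coded_bytes):
    """CR: size of the raw stream over the size of the coded stream."""
    return raw_bytes / coded_bytes

orig  = [52, 55, 61, 66, 70, 61, 64, 73]
recon = [54, 55, 60, 66, 69, 62, 64, 72]
print(round(mse(orig, recon), 3))   # 1.0
print(round(psnr(orig, recon), 2))  # 48.13
```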


Ghaida A. AL-Suhail

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 83-93
DOI: 10.37652/juaps.2008.15451

In this paper, a new analysis and the performance of a robust error model are presented for MPEG-4 video streaming over a wireless point-to-point network. The analytical expressions assume a noisy wireless environment causing frequent and random bit errors in packets. With this model, temporal video scalability can be evaluated under TCP-Friendly Rate Control (TFRC) transmission when Bose-Chaudhuri-Hocquenghem (BCH) channel coding is employed as forward error correction (FEC) at the radio link layer. FEC provides efficient throughput on the wireless link. The numerical results clearly indicate that the quality of service (QoS) can be improved in the low channel SNR region when the maximum channel coding throughput is achieved.
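
The benefit of BCH coding under random bit errors can be illustrated with the standard bounded-distance-decoding approximation (a sketch; the parameters below are the textbook BCH(127,106) code with t = 3, not necessarily those used in the paper):

```python
from math import comb

def block_error_prob(n, t, p):
    """Probability that a length-n codeword is not decodable by a code
    correcting up to t bit errors, assuming independent bit errors with
    probability p (bounded-distance-decoding approximation)."""
    ok = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))
    return 1 - ok

# BCH(127,106) corrects t = 3 errors; compare raw vs coded at p = 1e-3
p = 1e-3
print(block_error_prob(127, 0, p))  # uncoded block: ~0.119
print(block_error_prob(127, 3, p))  # with t=3 BCH:  ~9e-6
```

This four-orders-of-magnitude drop in block error rate is the mechanism by which FEC improves QoS in the low-SNR region discussed above.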


Muntaser Abdul-Wahed Salman

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 94-105
DOI: 10.37652/juaps.2008.15445

Two types of neural network learning algorithms were created, trained, tested, and evaluated in an effort to find the appropriate neural network training method for use in a numeral recognition problem. The purpose of this study was to compare the training speeds of two backpropagation learning algorithms (adaptive learning rate and Resilient) when exposed to ten number-recognition data sets. Each algorithm was trained using the ten data sets both as a basic set (Boolean values) and as a complex (noisy) set. The trials conducted indicated a significant difference between the two algorithms on the basic data set, with the Resilient algorithm training the network faster. The creation, training, and testing of each neural network were done using the MathWorks software package MATLAB, whose “Neural Network Toolbox” facilitates rapid creation, training, and testing of neural networks. MATLAB was chosen for learning algorithm development because this toolbox saves an enormous amount of programming effort.
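
The Resilient (Rprop) update rule that made training faster differs from plain gradient descent in using only the sign of the gradient; a single-weight update can be sketched as follows (illustrative constants; the MATLAB toolbox's actual settings are not reproduced):

```python
def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_max=50.0, step_min=1e-6):
    """One Resilient-backpropagation update for a single weight: the
    step size grows while the gradient keeps its sign and shrinks when
    the sign flips; the gradient magnitude itself is never used."""
    if grad * prev_grad > 0:
        step = min(step * eta_plus, step_max)    # same direction: accelerate
    elif grad * prev_grad < 0:
        step = max(step * eta_minus, step_min)   # overshot a minimum: back off
    delta_w = -step if grad > 0 else (step if grad < 0 else 0.0)
    return delta_w, step

print(rprop_step(grad=0.3, prev_grad=0.5, step=0.1))  # step grows: about (-0.12, 0.12)
```

Because the update ignores the gradient magnitude, Rprop is insensitive to the vanishing-gradient scaling that slows adaptive-learning-rate backpropagation on problems like this one.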


Ali M. Sagheer

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 106-115
DOI: 10.37652/juaps.2008.15446

In this paper we use elliptic curves in the public key traitor tracing scheme. The points of an elliptic curve form an Abelian group, which is used in the public key traitor tracing scheme. The main advantage of elliptic curve systems is their high cryptographic strength relative to the size of the key. We design and implement an elliptic curve public key encryption scheme in which there is one public encryption key but many private decryption keys, which are distributed through a broadcast channel. The security of the scheme is based on the Elliptic Curve Decisional Diffie-Hellman (ECDDH) problem, which is analogous to the Decisional Diffie-Hellman (DDH) problem but more intractable.
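
The group operation underlying the scheme, elliptic curve point addition and scalar multiplication over a prime field, can be sketched as follows (a toy curve chosen for illustration only, far too small for real security):

```python
def ec_add(P, Q, a, p):
    """Add two points on y^2 = x^3 + ax + b over F_p (None = identity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = identity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Scalar multiplication k*P by double-and-add: the one-way
    operation on which the ECDDH assumption rests."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

# Toy curve y^2 = x^3 + 2x + 3 over F_97 (parameters for illustration only)
a, p = 2, 97
G = (0, 10)
print(ec_mul(2, G, a, p))  # (65, 32)
```

A private key is then a scalar k, and the corresponding public value is the point k*G; recovering k from k*G is the discrete logarithm problem that gives the scheme its strength per key bit.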


Rabah N. Farhan

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 116-123
DOI: 10.37652/juaps.2008.15519

An iris recognition system consists of a sequence of stages, each of which performs specific functions on the captured image of the human iris. The first stage is automatic segmentation of the iris from the eye image. In this paper, a new genetic-based Hough transform algorithm (GACHT) is designed and implemented to carry out this first stage of the iris recognition system using a developed genetic algorithm. GACHT is used to detect the iris and pupil boundaries, which correspond to the outer and inner circles. The accuracy of any iris recognition system depends on locating the right region of the iris, which is done in this paper using the evolutionary GACHT algorithm.
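
The circular Hough transform at the core of the segmentation stage can be sketched as a voting procedure (a plain exhaustive accumulator shown for clarity; the paper's genetic algorithm instead searches this parameter space selectively, and the radius and edge points here are synthetic):

```python
from collections import Counter
from math import cos, sin, pi

def hough_circle_centers(edge_points, radius, steps=360):
    """Accumulate votes for circle centres: every edge point votes for
    all centres lying at the given radius from it; the cell with the
    most votes is the detected centre."""
    acc = Counter()
    for (x, y) in edge_points:
        for i in range(steps):
            t = 2 * pi * i / steps
            acc[(round(x - radius * cos(t)), round(y - radius * sin(t)))] += 1
    return acc.most_common(1)[0]

# Synthetic edge points on a circle of radius 10 centred at (30, 40)
pts = [(round(30 + 10 * cos(2 * pi * k / 24)),
        round(40 + 10 * sin(2 * pi * k / 24))) for k in range(24)]
centre, votes = hough_circle_centers(pts, 10)
print(centre)  # close to (30, 40)
```

The exhaustive scan over all centres and angles is what makes the plain transform expensive; a genetic search over the (centre, radius) parameter space evaluates only a fraction of these cells.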


Abdullah.M.Awad Al-Fahdawi

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 124-130
DOI: 10.37652/juaps.2008.15443

This paper describes a new lossy image compression/decompression algorithm. In lossy compression techniques there is some loss of information, and the image cannot be reconstructed exactly. The algorithm, referred to as IWDC, combines the integer wavelet transform (IWT) and the discrete cosine transform (DCT); it improves existing techniques and develops a new image compressor. IWDC is more efficient than the corresponding DCT and wavelet transform functions used separately, and incorporating the DCT with the integer wavelet transform is shown to improve the performance of both. The proposed scheme is more efficient than existing still image compression methods.
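
The reversible integer wavelet component of such a scheme can be illustrated with the well-known Le Gall 5/3 lifting steps (a one-dimensional sketch assuming an even-length signal; the paper's exact transform and its combination with the DCT are not specified in the abstract):

```python
def lift_53(x):
    """One level of the reversible Le Gall 5/3 integer wavelet (lifting
    scheme): returns integer low-pass (s) and high-pass (d) subbands."""
    n = len(x)
    d = [0] * (n // 2)
    s = [0] * (n // 2)
    for i in range(n // 2):                       # predict: odd minus neighbour average
        left = x[2 * i]
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]
        d[i] = x[2 * i + 1] - (left + right) // 2
    for i in range(n // 2):                       # update: even plus smoothed details
        prev = d[i - 1] if i > 0 else d[i]
        s[i] = x[2 * i] + (prev + d[i] + 2) // 4
    return s, d

def unlift_53(s, d):
    """Exact inverse of lift_53: integer lifting makes the transform
    lossless, so all loss is confined to the quantization stage."""
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):                       # undo update
        prev = d[i - 1] if i > 0 else d[i]
        x[2 * i] = s[i] - (prev + d[i] + 2) // 4
    for i in range(len(d)):                       # undo predict
        left = x[2 * i]
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]
        x[2 * i + 1] = d[i] + (left + right) // 2
    return x

x = [100, 102, 104, 108, 110, 110, 90, 70]
s, d = lift_53(x)
print(unlift_53(s, d) == x)  # True: the integer transform is exactly reversible
```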


Salih M. Salih

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 131-138
DOI: 10.37652/juaps.2008.15411

In this paper, a model based on the In-Place Wavelet Transform (IP-WT) is proposed to improve the performance of Orthogonal Frequency Division Multiplexing (OFDM) under Additive White Gaussian Noise (AWGN) and flat fading channels. The proposed model does not require additional arrays at each sweep, as the ordered Haar wavelet transform does; this ensures fast processing with minimum memory size. The results were extracted by computer simulation and compared with the performance of the conventional model based on the Fast Fourier Transform (FFT). The results show that the proposed technique offers a large performance improvement over the conventional FFT-based OFDM system, with the Bit Error Rate (BER) greatly reduced under these channel models.
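
The in-place idea, writing each sweep's averages and differences back into the same array, can be sketched with a Haar transform (an illustrative version assuming a power-of-two length; the paper's IP-WT details are not reproduced):

```python
def haar_inplace(x):
    """In-place Haar transform: each sweep overwrites the array with
    averages and differences at growing strides, so no auxiliary array
    is needed (unlike the ordered Haar transform, which reshuffles
    results into a second buffer). Length must be a power of two."""
    n = len(x)
    stride = 1
    while stride < n:
        for i in range(0, n, 2 * stride):
            a, b = x[i], x[i + stride]
            x[i], x[i + stride] = (a + b) / 2, (a - b) / 2
        stride *= 2
    return x

print(haar_inplace([1.0, 2.0, 3.0, 4.0]))  # [2.5, -0.5, -1.0, -0.5]
```

Slot 0 holds the overall average (the DC term) and the remaining slots hold detail coefficients at their original positions, which is exactly the memory saving claimed for the proposed model.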


Yusra Mahmood Humadi

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 139-144
DOI: 10.37652/juaps.2008.15527

Quantum computation is a new field of science that relates quantum physics to computer science. Quantum computers are still largely theoretical, since real, working quantum computers do not yet exist and are currently under development. Quantum computers are completely different from classical computers. Their building blocks are quantum objects such as atoms, ions or photons, whose states or polarizations are interpreted as ones, zeros, and superpositions of ones and zeros. Each quantum bit, or qubit, can exist in a superposition of states with different probabilities of observing a one or a zero when the qubit is measured. In this paper, a Probabilistic Quantum Computer Simulator (PQCS) was built to be used as a tool for studying the various quantum algorithms intended to be executed on a real quantum computer. The toolbox contains basic quantum units, such as NOT gates, the qubit Toffoli gate and the qubit Pauli-Z gate, represented in Hilbert space (HS). The designer of a quantum algorithm uses a drag-and-drop facility to accomplish the overall design. The proposed system is implemented in Visual Basic and has an interactive Graphical User Interface (GUI) that offers flexible design tools for building quantum algorithms in various fields.
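
The state-vector model that such a simulator rests on can be sketched for a single qubit (a minimal illustration with real amplitudes only; the gate set and interfaces of PQCS itself are assumptions):

```python
import math, random

# Single-qubit states as 2-component amplitude vectors
ZERO, ONE = [1.0, 0.0], [0.0, 1.0]

def apply_gate(gate, state):
    """Apply a 2x2 unitary matrix to a single-qubit state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

NOT = [[0.0, 1.0], [1.0, 0.0]]    # Pauli-X, the quantum NOT gate
Z   = [[1.0, 0.0], [0.0, -1.0]]   # Pauli-Z, flips the sign of |1>
h = 1 / math.sqrt(2)
H   = [[h, h], [h, -h]]           # Hadamard: creates a superposition

def measure(state, rng=random.Random(0)):
    """Probabilistic measurement: returns 0 or 1 with probabilities
    equal to the squared amplitudes, the 'probabilistic' part of PQCS."""
    return 0 if rng.random() < state[0] ** 2 else 1

s = apply_gate(H, ZERO)          # equal superposition of |0> and |1>
print([round(a, 3) for a in s])  # [0.707, 0.707]
print(measure(s))                # 0 or 1, each with probability 1/2
```

An n-qubit simulator generalizes this to 2^n-component vectors, which is why classical simulation cost grows exponentially with the number of qubits.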


Essam T. Yassen

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 145-158
DOI: 10.37652/juaps.2008.15432

In recent decades, computerized applications have been widely used in various environments, such as real-time systems, monitoring systems and others. These applications need prompt responses from the programmed system. The compiler phases represent the heart of any programming language, so enhancing the compiler makes execution more efficient. In this paper we present a new model for the SLR parser, which is a main stage of the compiler phases because it is responsible for the grammatical checking of program statements and needs more time than the other stages. The new model is faster and less complex than the original parser, and is therefore more efficient to use.
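
The table-driven mechanics of an SLR parser can be sketched for a toy grammar (the grammar, hand-built table, and step counting here are illustrative, not the paper's model):

```python
# SLR(1) table for the toy grammar  E -> E + n | n  (states built by hand)
ACTION = {
    (0, 'n'): ('s', 2),
    (1, '+'): ('s', 3), (1, '$'): ('acc', 0),
    (2, '+'): ('r', 2), (2, '$'): ('r', 2),   # reduce E -> n
    (3, 'n'): ('s', 4),
    (4, '+'): ('r', 1), (4, '$'): ('r', 1),   # reduce E -> E + n
}
GOTO = {(0, 'E'): 1}
RULES = {1: ('E', 3), 2: ('E', 1)}            # rule: (lhs, rhs length)

def slr_parse(tokens):
    """Table-driven SLR parse: shift states onto a stack, pop them on a
    reduce and follow GOTO; returns the number of parse actions taken."""
    stack, pos, steps = [0], 0, 0
    while True:
        steps += 1
        act, arg = ACTION.get((stack[-1], tokens[pos]), ('err', 0))
        if act == 's':                         # shift: consume one token
            stack.append(arg)
            pos += 1
        elif act == 'r':                       # reduce by rule arg
            lhs, size = RULES[arg]
            del stack[-size:]
            stack.append(GOTO[(stack[-1], lhs)])
        elif act == 'acc':
            return steps
        else:
            raise SyntaxError(tokens[pos])

print(slr_parse(['n', '+', 'n', '$']))  # 6 parse actions
```

The number of table lookups per token is what a faster parser model reduces, so counting actions gives a simple basis for comparing parser variants.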


Ali M.Al-Bermani

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 159-168
DOI: 10.37652/juaps.2008.15476

Multirate filters are among the main parts that determine receiving quality in wireless communication. Wireless applications, including ETSI DVB-T/H digital terrestrial television transmission and IEEE network standards such as 802.11 (“WiFi”) and 802.16 (“WiMAX”), have high-quality data acquisition and storage requirements that increasingly take advantage of multirate techniques to avoid expensive anti-aliasing analogue filters and to efficiently handle signals of different bandwidths, which require different sampling frequencies. The present work therefore deals with the design and implementation of a multistage distributed-arithmetic FIR filter with efficient multiplication cost and storage requirements. Previous work on filter implementation used either special programmable devices or DSP processors; some work used FPGA-based architectures to implement the filter in a single stage, but at high cost and with a complex design. The designed arrangements are simulated and implemented using VHDL-based software on a Virtex-II FPGA chip. High signal resolution and large dynamic range are the main features achieved in this work.
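
The central saving of multirate decimation, computing only the outputs that survive downsampling, can be sketched as follows (a plain Python illustration of the operation count that polyphase structures formalize; the distributed-arithmetic FPGA design itself is not reproduced):

```python
def fir_decimate(x, h, m):
    """Decimate-by-m FIR filter: conceptually 'filter then keep every
    m-th sample', but only the kept outputs are ever computed, so all
    multiplications run at the low output rate."""
    y = []
    for k in range(0, len(x), m):        # only every m-th output index
        acc = 0.0
        for n, tap in enumerate(h):
            if k - n >= 0:
                acc += tap * x[k - n]    # standard FIR convolution sum
        y.append(acc)
    return y

# 4-tap moving average as a crude anti-aliasing filter, decimate by 2
h = [0.25, 0.25, 0.25, 0.25]
x = [4.0, 8.0, 4.0, 8.0, 4.0, 8.0, 4.0, 8.0]
print(fir_decimate(x, h, 2))  # [1.0, 4.0, 6.0, 6.0]
```

The steady-state output settles at 6.0, the mean of the alternating input: the high-frequency alternation that would alias after downsampling has been filtered out, using only half the multiplications of filtering at the full rate.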


Murtadha M. Hamad

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 169-178
DOI: 10.37652/juaps.2008.15456

Bitmap indexes are usually used in database environments that hold large amounts of data. A bitmap reduces response time and storage requirements for large databases compared to other data structures such as the B-tree. In this research, the bitmap structure is studied and analyzed using Visual FoxPro 8 software. The empirical study demonstrated the efficiency of this structure for compressing keys. A comparison between this structure and the B-tree was made as an example to illustrate further advantages of the bitmap structure.
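
The bitmap structure can be sketched as one bit-vector per distinct key value (a minimal illustration; the column name and rows are hypothetical):

```python
def build_bitmap_index(rows, column):
    """Bitmap index: one bit-vector per distinct key value, with bit i
    set when row i holds that value; AND/OR queries then become single
    bitwise operations on the vectors."""
    index = {}
    for i, row in enumerate(rows):
        index.setdefault(row[column], 0)
        index[row[column]] |= 1 << i
    return index

rows = [{"city": "Anbar"}, {"city": "Basrah"}, {"city": "Anbar"}]
idx = build_bitmap_index(rows, "city")
print(bin(idx["Anbar"]))   # 0b101, i.e. rows 0 and 2
print(bin(idx["Basrah"]))  # 0b10,  i.e. row 1
```

Because each key occurrence costs one bit rather than a full key copy plus pointers, the index compresses low-cardinality columns far better than a B-tree, which is the advantage the comparison above illustrates.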


Imad H.A. AL-Iathary; Murtadhah M.H

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 179-185
DOI: 10.37652/juaps.2008.15442

Cluster analysis techniques are widely used in medical research. Clustering techniques are not unique, and hence users must be extremely conscious of what to use in order to analyze their data. The choice of an unsuitable technique will result directly in a misleading output that cannot be interpreted or even give hints for further investigation. Understanding data variability through the use of inferential methods will help us adopt the most appropriate classification technique and accordingly build more robust CRS and CDSSs.


Rana F. Ghani; Ahmed T. Sadik

Journal of University of Anbar for Pure Science, 2008, Volume 2, Issue 1, Pages 186-194
DOI: 10.37652/juaps.2008.15462

The aim of this paper is to improve the Dijkstra algorithm, which is widely used in Internet routing. A quantum computing approach is used to improve the work of the Dijkstra algorithm for network routing by exploiting the massive parallelism of the quantum environment and to deal with the demands of the continuous growth of the Internet. The algorithm is compared with Dijkstra's algorithm according to the number of iterations and time complexity, and the results show that the quantum approach finds the optimal path with better time complexity when implemented on a quantum computer.
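
The classical baseline being improved can be sketched as follows (standard heap-based Dijkstra; the graph is a hypothetical four-node network, and the quantum-parallel search itself is not reproduced):

```python
import heapq

def dijkstra(graph, src, dst):
    """Classical Dijkstra shortest path with a binary heap: repeatedly
    settle the closest unvisited node and relax its outgoing edges.
    This per-iteration minimum search is the step a quantum-search
    variant would parallelize."""
    dist = {src: 0}
    heap = [(0, src)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            return d
        for v, w in graph.get(u, []):       # relax edges out of u
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")

# Hypothetical 4-node network; edge weights are link costs
net = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 6)], "C": [("D", 3)]}
print(dijkstra(net, "A", "D"))  # 6, via A -> B -> C -> D
```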