Nikola Zlatanov - Academia.edu

Videos by Nikola Zlatanov

History in the making

How we Make Possible

Conference Presentations by Nikola Zlatanov

Advanced Plasma Processing for Semiconductor Manufacturing

In the etching and deposition steps in the production of semiconductor chips, plasma processing is required for three main reasons. First, electrons are used to dissociate the input gas into atoms. Second, the etch rate is greatly enhanced by ion bombardment, which breaks the bonds in the first few monolayers of the surface, allowing the etchant atoms, usually Cl or F, to combine with substrate atoms to form volatile molecules. And third, most importantly, the electric field of the plasma sheath straightens the orbits of the bombarding ions so that the etching is anisotropic, allowing the creation of features approaching nanometer dimensions. The plasma sources used in the semiconductor industry were originally developed by trial and error, with little basic understanding of how they work. To achieve this understanding, many challenging physics problems had to be solved. This chapter is an introduction to the science of radiofrequency (RF) plasma sources, which are by far the most common. Sources operating at zero or other frequencies, such as 2.45 GHz microwaves, lie outside our scope. Most RF sources use the 13.56 MHz industrial standard frequency. Among these, there are three main types: (1) capacitively coupled plasmas or CCPs, also called reactive ion etchers (RIEs); (2) inductively coupled plasmas (ICPs), also called transformer coupled plasmas (TCPs); and (3) helicon wave sources, which are new and can be called HWSs.

Optical Communications and Amplifiers

Fiber-optic communication is a method of transmitting information from one place to another by sending pulses of light through an optical fiber. The light forms an electromagnetic carrier wave that is modulated to carry information. First developed in the 1970s, fiber optics has revolutionized the telecommunications industry and has played a major role in the advent of the Information Age. Because of their advantages over electrical transmission, optical fibers have largely replaced copper wire communications in core networks in the developed world. Optical fiber is used by many telecommunications companies to transmit telephone signals, Internet communication, and cable television signals. Researchers at Bell Labs have reached a bandwidth-distance product of over 100 petabit × kilometers per second using fiber-optic communication. Modern fiber-optic communication systems generally include an optical transmitter to convert an electrical signal into an optical signal to send into the optical fiber, a cable containing bundles of multiple optical fibers that is routed through underground conduits and buildings, multiple kinds of amplifiers, and an optical receiver to recover the signal as an electrical signal. The information transmitted is typically digital information generated by computers, telephone systems, and cable television companies.
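A bandwidth-distance product trades data rate against span length. The short sketch below is only an illustrative calculation: the 100 petabit × km/s figure comes from the abstract, while the span lengths are assumed for the example.

#include <stdio.h>

/* A link rated at B petabit*km/s can, in principle, carry (B / d)
 * petabit/s over a span of d km. Span values below are illustrative. */
int main(void) {
    const double product_pbit_km_s = 100.0; /* record cited in the text */
    const double spans_km[] = { 1.0, 10.0, 100.0, 1000.0 };
    for (int i = 0; i < 4; i++) {
        printf("%7.0f km -> %8.3f petabit/s\n",
               spans_km[i], product_pbit_km_s / spans_km[i]);
    }
    return 0;
}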

Introduction to Fiber Optics Theory

The field of applied science and engineering concerned with the design and application of optical fibers is known as fiber optics. Optical fibers typically include a transparent core surrounded by a transparent cladding material with a lower index of refraction. Light is kept in the core by the phenomenon of total internal reflection, which causes the fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are called multi-mode fibers (MMF), while those that support a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a wider core diameter and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than 1,000 meters (3,300 ft).
An important aspect of fiber-optic communication is extending fiber-optic cables so that the losses introduced by joining two cables are kept to a minimum. Joining lengths of optical fiber often proves to be more complex than joining electrical wire or cable, and involves careful cleaving of the fibers, perfect alignment of the fiber cores, and the splicing of these aligned fiber cores.
For applications that demand a permanent connection, either a mechanical splice, which holds the ends of the fibers together mechanically, or a fusion splice, which uses heat to fuse the fiber ends together, can be used. Temporary or semi-permanent connections are made by means of specialized optical fiber connectors.
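The total-internal-reflection condition above can be made concrete with a worked example. This is a sketch with assumed, textbook-typical silica index values (not figures from the paper): the critical angle follows from n_clad/n_core, and the numerical aperture from sqrt(n_core^2 - n_clad^2).

#include <math.h>
#include <stdio.h>

int main(void) {
    const double PI = 3.14159265358979323846;
    /* Assumed, typical values for a silica fiber; not from the paper. */
    const double n_core = 1.48;
    const double n_clad = 1.46;

    /* Total internal reflection holds above the critical angle. */
    double theta_c = asin(n_clad / n_core) * 180.0 / PI;

    /* Numerical aperture sets the acceptance cone in air. */
    double na = sqrt(n_core * n_core - n_clad * n_clad);
    double theta_accept = asin(na) * 180.0 / PI;

    printf("critical angle       : %.1f deg\n", theta_c);       /* ~80.6 deg */
    printf("numerical aperture   : %.3f\n", na);                /* ~0.242 */
    printf("acceptance half-angle: %.1f deg\n", theta_accept);  /* ~14 deg */
    return 0;
}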

CPU vs. SOC – The battle for the future of computing

Ultimately, SoCs are the next step after CPUs. Eventually, SoCs will almost completely consume CPUs. We are already seeing this with AMD's Llano and Intel's Ivy Bridge CPUs, which integrate a memory controller, PCI Express, and a graphics processor onto the same chip. There will always be a market for general-purpose CPUs, especially where power and footprint are less of an issue (such as supercomputers). Mobile and wearable devices are the future of computers, though, and so are SoCs. This battle only applies to the mobile market, though, and maybe things like integrated boards for media centers, but certainly not to desktops, let alone servers. CPUs are, and will always be, the powerful bricks of horsepower at the foundation of powerful systems; SoCs are fit for mobile and integrated computing, but they simply can't keep up with powerful x86-based CPUs. Sure, more and more parts are integrated into CPUs, but that's completely different from SoCs: with a CPU you're asking questions like whether or not to integrate a memory controller and maybe a simple GPU; with an SoC you're asking whether or not to integrate a secondary or tertiary communications subsystem for Wi-Fi, 3G/4G, or Bluetooth, or the complete memory. We just have new peripherals. Different things will eventually be added to the chip die as they become standards, and taken off if they are no longer needed. Also, as chips become more advanced, they will be able to cover more functions as well. HD video used to need far too much power to integrate onto a chip with other functions, and now it comes standard. SoC is just an old concept with a new name.
So the newest integrated graphics will satisfy 80-90 percent of the market. From there, integrated vs. discrete graphics will go back and forth for a while, with discrete graphics slowly losing market share until it is just a niche item for certain professionals and a few gamers with very extreme setups. That point will probably entail an integrated graphics solution that can display 4K resolution at 16 times the polygon fill rate of a PS3 or Xbox 360. I would estimate the 10-15 year range, but by then, desktops themselves will be a fairly niche item. In the end, this is "Smart Devices vs. Large Computers", or otherwise "Integrated Motherboards vs. Multi-Applicable Motherboards". If a CPU is soldered onto a motherboard that still needs the same components as before, it will be more advantageous. Apple and many other companies have been building their computers this way for so long that the debate hardly makes sense anymore. That's why "Mac vs. PC" is still a battle: do you want a company that does it all for you, or a company that offers it all and lets you choose what you want from it and from another company? This article has nothing to do with the disappearance of old computer parts; it's literally just about how we organize them.

Radiation Safety System and Control Architecture in Ion Implanter

The systems that monitor, control, and/or mitigate the radiation hazard can include passive elements (e.g., shielding, fence), active elements (e.g., interlocked access control, beam interlocks, or radiation interlocks), and administrative elements (e.g., ropes and signs, area access locks, search procedure, operating policies and procedures). A Radiation Safety System (RSS), consisting of an array of passive and active safety elements, may be required to reduce the prompt radiation hazard. The RSS can include two complementary systems: the Access Control System (ACS) and the Radiation Control System (RCS). The ACS keeps people away from radiation hazards by controlling and limiting personnel access to prompt radiation hazards inside accelerator housing or shielding. The RCS keeps radiation hazards away from people by using passive elements (e.g., shielding or fence) and/or active elements (e.g., beam and radiation monitoring/limiting devices).
The control system involves all the hardware and software needed to manage the ion source, whether through its hierarchical structure or through local control. The devices to be controlled are: the ion source core (the plasma chamber's coil positions and currents, gas flow, and the repeller's position and current) and the RF system (RF pulse, klystron power, and the ATU's power matching). Those are the systems required to form and extract the beam. In addition, the auxiliary systems (cooling, electrical installation, etc.) guarantee operability, and the beam diagnostics measure the beam characteristics.
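The ACS/RCS split above lends itself to a simple permit-logic illustration. The sketch below is a generic, hypothetical beam-permit evaluation with invented input names (door switches, a radiation monitor); it is not the implanter's actual control code.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical inputs to a beam-permit decision; names are illustrative. */
struct safety_inputs {
    bool doors_locked;        /* ACS: access gates interlocked */
    bool area_searched;       /* ACS: search-and-secure completed */
    double dose_rate_uSv_h;   /* RCS: active radiation monitor reading */
    double dose_limit_uSv_h;  /* RCS: administrative dose-rate limit */
};

/* Beam permit requires every ACS and RCS condition simultaneously;
 * any single failed condition drops the permit (fail-safe AND logic). */
bool beam_permit(const struct safety_inputs *in) {
    return in->doors_locked
        && in->area_searched
        && in->dose_rate_uSv_h < in->dose_limit_uSv_h;
}

int main(void) {
    struct safety_inputs in = { true, true, 0.4, 2.0 };
    printf("beam permit: %s\n", beam_permit(&in) ? "GRANTED" : "DENIED");
    return 0;
}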

The Kernel Boot Process

This article looks at the details of the Kernel to see how an operating system starts life after the computer boots up, right up to the point where the boot loader, after stuffing the Kernel image into memory, is about to jump to the Kernel entry point. In computing, the Kernel is a computer program that manages input/output requests from software and translates them into data processing instructions for the central processing unit and other electronic components of a computer. The Kernel is a fundamental part of a modern computer's operating system. A Kernel connects the application software to the hardware of a computer. The critical code of the Kernel is usually loaded into a protected area of memory, which prevents it from being overwritten by other, less frequently used parts of the operating system or by applications. The Kernel performs its tasks, such as executing processes and handling interrupts, in Kernel space, whereas everything a user normally does, such as writing text in a text editor or running programs in a GUI (graphical user interface), is done in user space. This separation prevents user data and Kernel data from interfering with each other and thereby diminishing performance or causing the system to become unstable (and possibly crash). When a process makes a request of the Kernel, the request is called a system call. Various Kernel designs differ in how they manage system calls and resources. For example, a monolithic Kernel executes all the operating system instructions in the same address space in order to improve the performance of the system. A microkernel runs most of the operating system's background processes in user space, to make the operating system more modular and, therefore, easier to maintain.
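To make the Kernel-space/user-space boundary concrete, here is a minimal sketch assuming a Unix-like system (an assumption; the article does not target a specific OS). Each call below crosses into Kernel space via a system call and returns to user space.

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    /* getpid() and write() are thin wrappers over system calls:
     * execution traps into Kernel space, the Kernel does the work
     * in its protected address space, and control returns here. */
    pid_t pid = getpid();
    char msg[] = "hello from user space\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    printf("pid %d ran in user space; the I/O above was serviced in Kernel space\n",
           (int)pid);
    return 0;
}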

The data center evolution from Mainframe to Cloud

Cloud computing did not kill the mainframe. The disruptive technology did, however, cause the mainframe to evolve. The Cloud is not a Mainframe, though, and the Mainframe is not a Super Computer either.
Mainframe
The mainframe computer is an age-old legend. Mainframes have been around since the start of computing, and they continue to exist in upgraded form today. However, in the face of cloud computing, mainframes look like they will fully recede to the very niche market they resided in at the dawn of computing. The biggest advantage of mainframes right now is that you already own one. If you do not already own one, there is almost no reason to invest in one, as the solutions provided by cloud computing are often much more cost-effective in almost every situation. One benefit large companies enjoy about mainframes is 100% complete control over their own data. When using cloud services, you trust a third-party company not to touch your data. With mainframes, you never need to worry about anyone snooping on or touching your data. However, most large cloud companies are quite trustworthy, and the chances of them doing something you do not want are quite small. If you already do own a mainframe, there are definitely reasons to keep it. The cost of porting hundreds of thousands of lines of code would probably alone outweigh the benefits of switching to the cloud. Also, mainframes can be customized and specialized more than cloud services can, as the hardware itself is in the control of the user. Mainframe computers can have nothing to do with your internet connection, which is good because it reduces bandwidth usage and allows for easy use even when the internet is down.

Semiconductor Device Fabrication Technology

Most digital designers will never be confronted with the details of the manufacturing process that lies at the core of the semiconductor revolution. Yet, some insight into the steps that lead to an operational silicon chip comes in quite handy in understanding the physical constraints that are imposed on a designer of an integrated circuit, as well as the impact of the fabrication process on issues such as cost. In this chapter, we briefly describe the steps and techniques used in a modern integrated circuit manufacturing process. It is not our aim to present a detailed description of the fabrication technology, which easily deserves a complete course [Plummer00]. Rather, we aim at presenting the general outline of the flow and the interaction between the various steps. We learn that a set of optical masks forms the central interface between the intricacies of the manufacturing process and the design that the user wants to see transferred to the silicon fabric. The masks define the patterns that, when transcribed onto the different layers of the semiconductor material, form the elements of the electronic devices and the interconnecting wires. As such, these patterns have to adhere to some constraints in terms of minimum width and separation if the resulting circuit is to be fully functional. This collection of constraints is called the design rule set, and acts as the contract between the circuit designer and the process engineer. If the designer adheres to these rules, he gets a guarantee that his circuit will be manufacturable. An overview of the common design rules encountered in modern CMOS processes will be given. Finally, an overview is given of the IC packaging options. The package forms the interface between the circuit implemented on the silicon die and the outside world, and as such has a major impact on the performance, reliability, longevity, and cost of the integrated circuit.
2.2 Manufacturing CMOS Integrated Circuits
A simplified cross section of a typical CMOS inverter is shown in Figure 2.1. The CMOS process requires that both n-channel (NMOS) and p-channel (PMOS) transistors be built in the same silicon material. To accommodate both types of devices, special regions called wells must be created, in which the semiconductor material is of the opposite type to the channel. A PMOS transistor has to be created in either an n-type substrate or an n-well, while an NMOS device resides in either a p-type substrate or a p-well.
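The design-rule "contract" described above can be illustrated with a toy width/spacing check. Everything below is invented for illustration: the rectangle geometry, the 100 nm rule values, and the two-rule check are a minimal sketch, while real DRC decks contain hundreds of far richer rules.

#include <stdbool.h>
#include <stdio.h>

/* Toy geometry: axis-aligned rectangles on one mask layer (units: nm). */
struct rect { int x0, y0, x1, y1; };

/* Minimum-width rule: both dimensions of a shape must meet the rule. */
bool width_ok(struct rect r, int min_width) {
    return (r.x1 - r.x0) >= min_width && (r.y1 - r.y0) >= min_width;
}

/* Minimum-spacing rule for two non-overlapping shapes: the gap along
 * x or y must meet the rule. (Real DRC also handles corners, notches.) */
bool spacing_ok(struct rect a, struct rect b, int min_space) {
    int gap_x = (a.x1 <= b.x0) ? b.x0 - a.x1 : (b.x1 <= a.x0) ? a.x0 - b.x1 : 0;
    int gap_y = (a.y1 <= b.y0) ? b.y0 - a.y1 : (b.y1 <= a.y0) ? a.y0 - b.y1 : 0;
    return gap_x >= min_space || gap_y >= min_space;
}

int main(void) {
    struct rect m1 = { 0, 0, 90, 400 };     /* a 90 nm wide wire */
    struct rect m2 = { 200, 0, 300, 400 };  /* 110 nm gap to m1 */
    const int MIN_W = 100, MIN_S = 100;     /* invented rule values */
    printf("width m1   : %s\n", width_ok(m1, MIN_W) ? "pass" : "VIOLATION");
    printf("space m1-m2: %s\n", spacing_ok(m1, m2, MIN_S) ? "pass" : "VIOLATION");
    return 0;
}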

SCSI Drives and RAID Arrays Functionality

What is SCSI
A computer is full of busses: highways that take information and power from one place to another. For example, when you plug an MP3 player or digital camera into your computer, you're probably using a universal serial bus (USB) port. Your USB port is good at carrying the data and electricity required for small electronic devices that do things like create and store pictures and music files. But that bus isn't big enough to support a whole computer, a server or lots of devices simultaneously. For that, you would need something more like SCSI. SCSI originally stood for Small Computer System Interface, but it's really outgrown the "small" designation. It's a fast bus that can connect lots of devices to a computer at the same time, including hard drives, scanners, CD-ROM/RW drives, printers and tape drives. Other technologies, like Serial ATA (SATA), have largely replaced it in new systems, but SCSI is still in use. This article will review SCSI basics and give you lots of information on SCSI types and specifications.
SCSI Basics
SCSI is based on an older, proprietary bus interface called Shugart Associates System Interface (SASI). SASI was originally developed in 1981 by Shugart Associates in conjunction with NCR Corporation. In 1986, the American National Standards Institute (ANSI) ratified SCSI (pronounced "scuzzy"), a modified version of SASI. SCSI uses a controller to send and receive data and power to SCSI-enabled devices, like hard drives and printers. SCSI has several benefits. It's fairly fast, up to 320 megabytes per second (MBps). It's been around for more than 20 years and it's been thoroughly tested, so it has a reputation for being reliable. Like Serial ATA and FireWire, it lets you put multiple items on one bus. SCSI also works with most computer systems. However, SCSI also has some potential problems. It has limited system BIOS support, and it has to be configured for each computer. There's also no common SCSI software interface. Finally, all the different SCSI types have different speeds, bus widths and connectors, which can be confusing. When you know the meaning behind "Fast," "Ultra" and "Wide," though, it's pretty easy to understand. We'll look at these SCSI types next.
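The "Fast/Ultra/Wide" naming encodes transfer rate and bus width, and the 320 MBps figure quoted above follows directly from them. The figures below are standard parallel-SCSI numbers, used here only as an illustrative calculation.

#include <stdio.h>

/* Parallel SCSI throughput = transfer rate (megatransfers/s)
 * x bus width in bytes. "Wide" doubles the 8-bit bus to 16 bits;
 * Ultra320 reaches 320 MBps with a 16-bit bus at 160 MT/s. */
struct scsi_type { const char *name; double mtransfers_s; int bus_bits; };

int main(void) {
    struct scsi_type types[] = {
        { "SCSI-1",      5,   8 },
        { "Fast SCSI",   10,  8 },
        { "Fast Wide",   10, 16 },
        { "Ultra2 Wide", 40, 16 },
        { "Ultra320",    160, 16 },
    };
    for (int i = 0; i < 5; i++) {
        double mbps = types[i].mtransfers_s * (types[i].bus_bits / 8.0);
        printf("%-12s %6.0f MBps\n", types[i].name, mbps);
    }
    return 0;
}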

Programmable Logic Devices and Embedded Systems

A quiet revolution is taking place. Over the past decade, the density of the average programmable logic device has begun to skyrocket. The maximum number of gates in an FPGA is currently around 20,000,000 and doubling every 18 months. Meanwhile, the price of these chips is dropping. What all of this means is that the price of an individual NAND or NOR gate is rapidly approaching zero! And the designers of embedded systems are taking note. Some system designers are buying processor cores and incorporating them into system-on-a-chip designs; others are eliminating the processor and software altogether, choosing an alternative hardware-only design. As this trend continues, it becomes more difficult to separate hardware from software. After all, both hardware and software designers are now describing logic in high-level terms, albeit in different languages, and downloading the compiled result to a piece of silicon. Surely, no one would claim that language choice alone marks a real distinction between the two fields. Turing's notion of machine-level equivalence and the existence of language-to-language translators taught us all long ago that that kind of reasoning is foolish. There are now even products that allow designers to create their hardware designs in traditional programming languages like C. Therefore, language differences alone are not enough of a distinction. Both hardware and software designs are compiled from a human-readable form into a machine-readable one. And both designs are ultimately loaded into some piece of silicon. Does it matter that one chip is a memory device and the other a piece of programmable logic? If not, how else can we distinguish hardware from software? I am not convinced that an unambiguous distinction between hardware and software can ever be found, but I do not think that matters all that much. Regardless of where the line is drawn, there will continue to be engineers like you and me who cross the boundary in our work. So rather than try to nail down a precise boundary between hardware and software design, we must assume that there will be overlap in the two fields. And we must all learn about new things. Hardware designers must learn how to write better programs, and software developers must learn how to utilize programmable logic.
Types of programmable logic
Many types of programmable logic are available. The current range of offerings includes everything from small devices capable of implementing only a handful of logic equations to huge FPGAs that can hold an entire processor core (plus peripherals!). In addition to this incredible difference in size, there is also much variation in architecture. In this section, I will introduce you to the most common types of programmable logic and highlight the most important features of each type.

PCB Design Process and Fabrication Challenges

Virtually every electronic product is constructed with one or more printed-circuit boards (PCBs). The PCBs hold the ICs and other components and implement the interconnections between them. PCBs are created in abundance for portable electronics, computers, and entertainment equipment. They are also made for test equipment, manufacturing, and spacecraft. Eventually, almost every EE must design a PCB, which is not something that is taught in school. Yet engineers, technicians, and even novice PCB designers can create high-quality PCBs for any and every purpose with confidence that the outcome will meet or exceed the objective. Also, these designs can be completed on schedule and within budget while meeting the design requirements. Designers just need to mind the essential documentation, design steps and strategies, and final checks.
The Basic Design Process
The ideal PCB design starts with the discovery that a PCB is needed and continues through the final production boards (Fig. 1). After determining why the PCB is needed, the product's final concept should be decided. The concept includes the design's features, the functions the PCB must have and perform, interconnection with other circuits, placement, and the approximate final dimensions.
Fig. 1. The ideal PCB design flow begins when designers recognize a need that must be fulfilled, and it does not end until testing verifies that the design can meet those needs.

High-Brightness LED Application Theory and Challenges

LEDs are the most efficient way to turn an electric current into illumination. When a current flows through a diode in the forward direction, it consists of surplus electrons moving in one direction in the lattice and "holes" (voids in the lattice) moving in the other. Occasionally, electrons can recombine with holes. When they do, the process releases energy in the form of photons.
This is true of all semiconductor junctions, but LEDs use materials that maximize the effect. The color of the light emitted (corresponding to the energy of the photon) is determined by the semiconductor materials that form the diode junction.
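The relation behind that statement is E = hc/λ: the junction material's bandgap sets the photon energy and hence the color. A quick numeric check follows; the bandgap values are approximate, textbook-typical figures, not data from the paper.

#include <stdio.h>

int main(void) {
    /* lambda (nm) = 1239.84 / E (eV), from E = hc / lambda. */
    const double hc_eV_nm = 1239.84;
    /* Approximate direct-gap energies; illustrative values only. */
    struct { const char *material; double e_gap_eV; } leds[] = {
        { "GaAs (infrared)", 1.42 },
        { "AlGaInP (red)",   1.9  },
        { "InGaN (blue)",    2.8  },
    };
    for (int i = 0; i < 3; i++)
        printf("%-16s ~%4.0f nm\n",
               leds[i].material, hc_eV_nm / leds[i].e_gap_eV);
    return 0;
}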
The latest high-brightness (HB) white LEDs are made possible by semiconductor materials that produce blue or ultraviolet photons. In addition to the diode, an HB package contains "yellow" phosphors on the inside of its lens. Some "blue" photons escape, but others excite the phosphors, which then give off "yellow" photons. The result can be tuned in manufacturing to produce "white" light.
Lots of engineering relates to controlling the quality of this light, with several ways to interconnect multiple LEDs to increase and manage light output. The general approach is to drive series strings with a constant current, but there are subtleties to interfacing the drivers with AC supplies and control schemes.
Light and lighting represent basic and crucial elements in the life of humankind. The pursuit of new lighting sources has been a trend of our civilization. This pursuit is generally driven by technological advancements, needs, challenges, and, sometimes, by luxury. Now that we are waking up to the consequences of abusing our world's limited resources, the push towards energy conservation has become a mandate, not a choice. Therefore, our world's current challenge is how to balance the needs of our modern, possibly spoiled, lifestyle with the necessity to 'go green'. When it comes to lighting, it is quite easy to imagine the impact of globally improving the efficiency of lighting sources by 10%. But what if it could be improved by 1000%? The use of newly enhanced Light Emitting Diodes (LEDs) as lighting sources has the potential to achieve these efficiency improvements while maintaining outstanding performance and reliability that supersede many of the currently used sources. Part One of this two-part series sheds some light on the basics of LEDs' physical structure, colors, efficiency, applications, and drivers.

Hard Disk Drive and Disk Encryption

A hard disk drive (HDD), hard disk, hard drive or fixed disk is a data storage device used for storing and retrieving digital information using one or more rigid ("hard") rapidly rotating disks (platters) coated with magnetic material. The platters are paired with magnetic heads arranged on a moving actuator arm, which read and write data to the platter surfaces.[2] Data is accessed in a random-access manner, meaning that individual blocks of data can be stored or retrieved in any order and not only sequentially. HDDs are a type of non-volatile memory, retaining stored data even when powered off. Introduced by IBM in 1956,[3] HDDs became the dominant secondary storage device for general-purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDD units, though most current units are manufactured by Seagate, Toshiba and Western Digital. As of 2015, HDD production (exabytes per year) and areal density are growing, although unit shipments are declining. The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a capacity of 1,000 gigabytes (GB; where 1 gigabyte = 1 billion bytes). Typically, some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and possibly inbuilt redundancy for error correction and recovery. Performance is specified by the time required to move the heads to a track or cylinder (average access time) plus the time it takes for the desired sector to move under the head (average latency, which is a function of the physical rotational speed in revolutions per minute), and finally the speed at which the data is transmitted (data rate). The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as PATA (Parallel ATA), SATA (Serial ATA), USB or SAS (Serial Attached SCSI) cables. As of 2016, the primary competing technology for secondary storage is flash memory in the form of solid-state drives (SSDs), which have higher data transfer rates, better reliability,[4] and significantly lower latency and access times, but HDDs remain the dominant medium for secondary storage due to advantages in price per bit.[5][6] However, SSDs are replacing HDDs where speed, power consumption and durability are more important considerations.[7][8] Hybrid drive products have been available since 2007.[9] These are a combination of HDD and SSD technology in a single device, also known by the initialism SSHD.
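The performance definition above (access time plus rotational latency plus data rate) invites a quick calculation: average rotational latency is the time for half a revolution. The spindle speeds below are common drive classes, used here illustratively.

#include <stdio.h>

int main(void) {
    /* Average rotational latency = time for half a revolution:
     * latency_ms = (60 / rpm) * 1000 / 2 = 30000 / rpm. */
    const double rpms[] = { 5400, 7200, 10000, 15000 };
    for (int i = 0; i < 4; i++)
        printf("%5.0f RPM -> average latency %.2f ms\n",
               rpms[i], 30000.0 / rpms[i]);
    return 0;
}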

Dynamic Memory Allocation and Fragmentation

In C and C++, it can be very convenient to allocate and de-allocate blocks of memory as and when needed. This is certainly standard practice in both languages and almost unavoidable in C++. However, the handling of such dynamic memory can be problematic and inefficient. For desktop applications, where memory is freely available, these difficulties can be ignored. For real-time embedded systems, ignoring the issues is not an option.
Dynamic memory allocation tends to be non-deterministic; the time taken to allocate memory may not be predictable, and the memory pool may become fragmented, resulting in unexpected allocation failures. In this paper the problems are outlined in detail. Facilities in the Nucleus RTOS for handling dynamic memory are outlined, and an approach to deterministic dynamic memory allocation is detailed.
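A minimal illustration of the fragmentation problem described above (a sketch, not the Nucleus RTOS facilities): after freeing alternate blocks, the total free memory is large, yet no single free region can satisfy a bigger request without growing the heap, which a fixed-size embedded pool cannot do.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Allocate eight 1 KiB blocks, then free every other one.
     * The heap now holds 4 KiB free, but only in 1 KiB fragments,
     * so a 2 KiB request cannot reuse any single hole; a desktop
     * allocator grows the heap, a fixed embedded pool would fail. */
    enum { N = 8, BLOCK = 1024 };
    void *blocks[N];
    for (int i = 0; i < N; i++)
        blocks[i] = malloc(BLOCK);
    for (int i = 0; i < N; i += 2) {
        free(blocks[i]);           /* punch holes in the pool */
        blocks[i] = NULL;
    }
    void *big = malloc(2 * BLOCK); /* does not fit in any single hole */
    printf("2 KiB request %s\n", big ? "satisfied (heap grew)" : "failed");
    free(big);
    for (int i = 1; i < N; i += 2)
        free(blocks[i]);
    return 0;
}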

Disruptive Technologies and the Society

The pre-technological period, in which all other animal species remain today, was a non-rational period of the early prehistoric man. The emergence of technology, made possible by the development of the rational faculty, paved the way for the first stage: the tool. A tool provides a mechanical advantage in accomplishing a physical task; an arrow, plow, or hammer augments physical labor to achieve the objective more efficiently. Later, animal-powered tools such as the plow and the horse increased the productivity of food production about tenfold over the technology of the hunter-gatherers. Tools allow one to do things impossible to accomplish with one's body alone, such as seeing minute visual detail with a microscope, manipulating heavy objects with a pulley and cart, or carrying volumes of water in a bucket. The second technological stage was the creation of the machine. A machine is a tool that substitutes for human physical effort and requires a human only to control its function. Machines became widespread with the industrial revolution. Examples of this include cars, trains, computers and lights. Machines allow humans to tremendously exceed the limitations of their bodies. Putting a machine on the farm, a tractor, increased food productivity at least tenfold over the technology of the plow and the horse. The third, and final, stage of technological evolution is automation. An automaton is a machine that removes the element of human control with an automatic algorithm. Examples of machines that exhibit this characteristic are digital watches, automatic telephone switches, pacemakers and computer programs.

Design of an Open-Source SATA Core

Serial ATA is a peripheral interface created in 2003 to replace Parallel ATA, also known as IDE. Hard drive speeds were getting faster, and would soon outpace the capabilities of the older standard: the fastest PATA speed achieved was 133 MB/s, while SATA began at 150 MB/s and was designed with future performance in mind [2]. Also, newer silicon technologies used lower voltages than PATA's 5 V minimum. The ribbon cables used for PATA were also a problem; they were wide and blocked air flow, had a short maximum length restriction, and required many pins and signal lines [2].
SATA has a number of features that make it superior to Parallel ATA. The signaling voltages are low and the cables and connectors are very small. SATA has outpaced hard drive performance, so the interface is not a bottleneck in a system. It also has a number of new features, including hot-plug support.
SATA is a point-to-point architecture, where each SATA link contains only two devices: a SATA host (typically a computer) and the storage device. If a system requires multiple storage devices, each SATA link is maintained separately. This simplifies the protocol and allows each storage device to utilize the full capabilities of the bus simultaneously, unlike in the PATA architecture where the bus is shared.
To ease the transition to the new standard, SATA maintains backward compatibility with PATA. To do this, the Host Bus Adapter (HBA) maintains a set of shadow registers that mimic the registers used by PATA. The disk also maintains a set of these registers. When a register value is changed, the register set is sent across the serial line to keep both sets of registers synchronized. This allows the software drivers to be agnostic about the interface being used.
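A rough data-structure sketch of the shadow-register idea described above. The field names follow the classic ATA task-file layout, and sync_over_serial_link is a hypothetical stand-in for the frame transfer the text describes; this is not the thesis's actual core.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Classic ATA task-file registers that the HBA "shadows" for PATA
 * compatibility; both the host and the device keep a copy. */
struct ata_taskfile {
    uint8_t features, sector_count;
    uint8_t lba_low, lba_mid, lba_high;
    uint8_t device, command, status;
};

/* Hypothetical stand-in for the transfer sent over the serial link
 * whenever a register changes, keeping both copies synchronized. */
void sync_over_serial_link(const struct ata_taskfile *host,
                           struct ata_taskfile *device) {
    memcpy(device, host, sizeof *device);
}

int main(void) {
    struct ata_taskfile host = {0}, device = {0};
    host.command = 0x25;           /* READ DMA EXT, a standard ATA opcode */
    host.sector_count = 8;
    sync_over_serial_link(&host, &device);
    printf("device sees command 0x%02X, count %d\n",
           (unsigned)device.command, (int)device.sector_count);
    return 0;
}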

Computer Security and Mobile Security Challenges

Computer security, also known as cybersecurity or IT security, is the protection of information systems from theft of or damage to the hardware, the software, and the information on them, as well as from disruption or misdirection of the services they provide.[1] It includes controlling physical access to the hardware, as well as protecting against harm that may come via network access or data and code injection,[2] and against harm due to malpractice by operators, whether intentional, accidental, or due to their being tricked into deviating from secure procedures.[3]
The field is of growing importance due to the increasing reliance on computer systems in most societies.[4] Computer systems now include a very wide variety of "smart" devices, including smartphones, televisions and tiny devices as part of the Internet of Things – and networks include not only the Internet and private data networks, but also Bluetooth, Wi-Fi and other wireless networks.

Computer Busses, Ports and Peripheral Devices

In computer architecture, a bus (related to the Latin "omnibus", meaning "for all") is a communication system that transfers data between components inside a computer, or between computers. The expression covers all related hardware components (wire, optical fiber, etc.) and software, including communication protocols. Early computer buses were parallel electrical wires with multiple connections, but the term is now used for any physical arrangement that provides the same logical functionality as a parallel electrical bus. Modern computer buses can use both parallel and bit-serial connections, and can be wired in either a multidrop (electrical parallel) or daisy-chain topology, or connected by switched hubs, as in the case of USB.

History in the making

4 views

How we Make Possible

1 views

Research paper thumbnail of Advanced Plasma Processing for Semiconductor Manufacturing

In the etching and deposition steps in the production of semiconductor chips, plasma processing i... more In the etching and deposition steps in the production of semiconductor chips, plasma processing is required for three main reasons. First, electrons are used to dissociate the input gas into atoms. Second, the etch rate is greatly enhanced by ion bombardment, which breaks the bonds in the first few monolayers of the surface, allowing the etchant atoms, usually Cl or F, to combine with substrate atoms to form volatile molecules. And third, most importantly, the electric field of the plasma sheath straightens the orbits of the bombarding ions so that the etching is anisotropic, allowing the creation of features approaching nanometer dimensions. The plasma sources used in the semiconductor industry were originally developed by trial and error, with little basic understanding of how they work. To achieve this understanding, many challenging physics problems had to be solved. This chapter is an introduction to the science of radiofrequency (RF) plasma sources, which are by far the most common. Sources operating at zero or other frequencies, such as 2.45 GHz microwaves, lie outside our scope. Most RF sources use the 13.56 MHz industrial standard frequency. Among these, there are three main types: (1) capacitively coupled plasmas or CCPs, also called reactive ion etchers (RIEs); (2) inductively coupled plasmas (ICPs), also called transformer coupled plasmas (TCPs); and (3) helicon wave sources, which are new and can be called HWSs.

Research paper thumbnail of Optical Communications and Amplifiers

Fiber-optic communication is a method of transmitting information from one place to another by se... more Fiber-optic communication is a method of transmitting information from one place to another by sending pulses of light through an optical fiber. The light forms an electromagnetic carrier wave that is modulated to carry information. First developed in the 1970s, fiber-optics have revolutionized the telecommunications industry and have played a major role in the advent of the Information Age. Because of its advantages over electrical transmission, optical fibers have largely replaced copper wire communications in core networks in the developed world. Optical fiber is used by many telecommunications companies to transmit telephone signals, Internet communication, and cable television signals. Researchers at Bell Labs have reached internet speeds of over 100 petabit.kilometer per second using fiber-optic communication. Modern fiber-optic communication systems generally include an optical transmitter to convert an electrical signal into an optical signal to send into the optical fiber, a cable containing bundles of multiple optical fibers that is routed through underground conduits and buildings, multiple kinds of amplifiers, and an optical receiver to recover the signal as an electrical signal. The information transmitted is typically digital information generated by computers, telephone systems, and cable television companies.

Research paper thumbnail of Introduction to Fiber Optics Theory

The field of applied science and engineering concerned with the design and application of optical... more The field of applied science and engineering concerned with the design and application of optical fibers is known as fiber optics. Optical fibers typically include a transparent core surrounded by a transparent cladding material with a lower index of refraction. Light is kept in the core by the phenomenon of total internal reflection which causes the fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are called multi-mode fibers (MMF), while those that support a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a wider core diameter and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than 1,000 meters (3,300 ft).
An important aspect of a fiber optic communication is that of extension of the fiber optic cables such that the losses brought about by joining two different cables is kept to a minimum. Joining lengths of optical fiber often proves to be more complex than joining electrical wire or cable and involves careful cleaving of the fibers, perfect alignment of the fiber cores, and the splicing of these aligned fiber cores.
For applications that demand a permanent connection a mechanical splice which holds the ends of the fibers together mechanically could be used or a fusion splice that uses heat to fuse the ends of the fibers together could be used. Temporary or semi-permanent connections are made by means of specialized optical fiber connectors.

Research paper thumbnail of CPU vs. SOC – The battle for the future of computing

Ultimately, SoCs are the next step after CPUs. Eventually, SoCs will almost completely consume CP... more Ultimately, SoCs are the next step after CPUs. Eventually, SoCs will almost completely consume CPUs. We are already seeing this with AMD’s Llano and Intel’s Ivy Bridge CPUs, which integrate a memory controller, PCI Express, and a graphics processor onto the same chip. There will always be a market for general purpose CPUs, especially where power and footprint are less of an issue (such as supercomputers). Mobile and wearable devices are the future of computers, though, and so are SoCs. This battle only applies to the mobile market though, and maybe things like integrated boards for media centers and such, but certainly not for desktops, let alone servers. CPUs are, and will always be the powerful bricks of horsepower that are at the foundation of powerful system, SoCs are fit for mobile computing and integrated computing, but they simply can't keep up with the powerful x86 based CPUs. Sure, more and more parts are integrated into CPUs, but that's completely different from SoCs, with a CPU you're asking yourself questions like 'whether or not to integrate a memory controller and maybe simple GPU', with SoCs you're asking yourself questions like 'whether or not to integrate a secondary or tertiary communications subsystem for Wi-Fi, 3/4G, Bluetooth, or the complete memory'. We just have new peripherals. Different things will eventually be added to the chip die as they become standards, and taken off if they are no longer needed. Also, as chips become more advanced, they will be able to cover more functions as well. HD video used to need far too much power to integrate onto a chip with other functions, and now, it comes standard. SoC is just an old concept with a new name.
So the newest integrated graphics will satisfy 80-90 percent of the market. From there, the integrated vs. discrete graphics will go up and down for a while with discrete graphics slowly losing market share until it is just a niche item for certain professionals, and a few gamers with very extreme setups. That point will probably entail an integrated graphic solution getting to the point where it display 4K resolution at 16 times the polygon fill rate of a PS3 or Xbox360. I would estimate in the 10-15 year range, but by then, desktops themselves will be a fairly niche item. At the end, this is "Smart Devices vs. Large Computers" or otherwise "Integrated Motherboards vs. Multi-Applicable Mother Boards”. If a CPU is forcibly soldered into a motherboard still needing the same components as before, it will be more advantageous. Apple and many other companies have been doing this to their computers for so long it hardly makes sense. That's why "Mac vs. PCs" are still a battle, if you want a company that does it all for you? Or want a company that offers it all and you can choose what you want from it and another company. This article has nothing to do with the disappearance of old computer parts it's literally just how we organize them.

Research paper thumbnail of Radiation Safety System and Control Architecture in Ion Implanter

The systems that monitor, control, and/or mitigate the radiation hazard can include passive eleme... more The systems that monitor, control, and/or mitigate the radiation hazard can include passive elements (e.g., shielding, fence), active elements (e.g., interlocked access control, beam interlocks, or radiation interlocks), and administrative elements (e.g., ropes and signs, area access locks, search procedure, operating policies and procedures). A Radiation Safety System (RSS), consisting of an array of passive and active safety elements, may be required to reduce the prompt radiation hazard. The RSS can include two complementary systems: the Access Control System (ACS) and the Radiation Control System (RCS). The ACS keeps people away from radiation hazards by controlling and limiting personnel access to prompt radiation hazards inside accelerator housing or shielding. The RCS keeps radiation hazards away from people by using passive elements (e.g., shielding or fence) and/or active elements (e.g., beam and radiation monitoring/limiting devices).
The Control system involves all the hardware and software needed to manage the ion source. It implies either its hierarchical structure or local control. A summary of the devices to be controlled is: The ion source core (plasma chamber’s coils positions and currents, flow and repeller’s position and current) and the RF system (RF pulse, Klystron power and ATU’s power matching). Those are the systems required to form and extract the beam. In addition, the auxiliary systems (cooling, electrical installation, etc.) guarantee the operability and the beam diagnostics measure the beam characteristics.

Research paper thumbnail of The Kernel Boot Process

This article is about booting at the details of the Kernel to see how an operating system starts ... more This article is about booting at the details of the Kernel to see how an operating system starts life after computers boot up right up to the point where the boot loader, after stuffing the Kernel image into memory, is about to jump into the Kernel entry point. In computing, the Kernel is a computer program that manages input/output requests from software, and translates them into data processing instructions for the central processing unit and other electronic components of a computer. The Kernel is a fundamental part of a modern computer's operating system. A Kernel connects the application software to the hardware of a computer The critical code of the Kernel is usually loaded into a protected area of memory, which prevents it from being overwritten by other, less frequently used parts of the operating system or by applications. The Kernel performs its tasks, such as executing processes and handling interrupts, in Kernel space, whereas everything a user normally does, such as writing text in a text editor or running programs in a GUI (graphical user interface), is done in user space. This separation prevents user data and Kernel data from interfering with each other and thereby diminishing performance or causing the system to become unstable (and possibly crashing). When a process makes requests of the Kernel, the request is called a system call. Various Kernel designs differ in how they manage system calls and resources. For example, a monolithic Kernel executes all the operating system instructions in the same address space in order to improve the performance of the system. A microKernel runs most of the operating system's background processes in user space, to make the operating system more modular and, therefore, easier to maintain.

Research paper thumbnail of The data center evolution from Mainframe to Cloud

Cloud computing did not kill the mainframe. The disruptive technology did, however, it caused the... more Cloud computing did not kill the mainframe. The disruptive technology did, however, it caused the mainframe to evolve. The Cloud is not a Mainframe though. Moreover, the Mainframe is not a Super Computer too. Mainframe The mainframe computer is an age-old legend. They have been around since the start of computing, and they continue to exist in upgraded form today. However, in the face of cloud computing, mainframes look like they will fully recede to the very niche market they resided in during the age of the dawn of computing. The biggest advantage of mainframes right now is that you already own one. If you do not already own one, there is almost no reason to invest into one, as the solutions provided by cloud computing are often much more cost effective in almost every situation. One benefit large companies enjoy about mainframes is the 100% complete control over their own data. When using cloud services, you trust a third party company to not touch your data. With mainframes, you never need to worry about them snooping or touching your data. However, most large cloud companies are quite trustworthy and the chances of them doing something you do not want them to be quite small. However, if you already do own one, there are definitely reasons to keep it. The cost of getting hundreds of thousands of lines of code transferred over would probably alone outweigh the benefits of switching to cloud. Also, mainframes have the capability to be customized and specialized more than cloud services can, as the hardware itself is in control of the user. Mainframe computers can have nothing to do with your internet connection, which is good because it reduces bandwidth being used and allows for easy use even when the internet is down.

Research paper thumbnail of Semiconductor Device Fabrication Technology

Most digital designers will never be confronted with the details of the manufacturing process tha... more Most digital designers will never be confronted with the details of the manufacturing process that lies at the core of the semiconductor revolution. Yet, some insight in the steps that lead to an operational silicon chip comes in quite handy in understanding the physical constraints that are imposed on a designer of an integrated circuit, as well as the impact of the fabrication process on issues such as cost. In this chapter, we briefly describe the steps and techniques used in a modern integrated circuit manufacturing process. It is not our aim to present a detailed description of the fabrication technology, which easily deserves a complete course [Plummer00]. Rather we aim at presenting the general outline of the flow and the interaction between the various steps. We learn that a set of optical masks forms the central interface between the intrinsics of the manufacturing process and the design that the user wants to see transferred to the silicon fabric. The masks define the patterns that, when transcribed onto the different layers of the semiconductor material, form the elements of the electronic devices and the interconnecting wires. As such, these patterns have to adhere to some constraints in terms of minimum width and separation if the resulting circuit is to be fully functional. This collection of constraints is called the design rule set, and acts as the contract between the circuit designer and the process engineer. If the designer adheres to these rules, he gets a guarantee that his circuit will be manufacturable. An overview of the common design rules, encountered in modern CMOS processes, will be given. Finally, an overview is given of the IC packaging options. The package forms the interface between the circuit implemented on the silicon die and the outside world, and as such has a major impact on the performance, reliability, longevity, and cost of the integrated circuit. 2.2 Manufacturing CMOS Integrated Circuits A simplified cross section of a typical CMOS inverter is shown in Figure 2.1. The CMOS process requires that both n-channel (NMOS) and p-channel (PMOS) transistors be built in the same silicon material. To accommodate both types of devices, special regions called wells must be created in which the semiconductor material is opposite to the type of the channel. A PMOS transistor has to be created in either an n-type substrate or an n-well, while an NMOS device resides in either a p-type substrate or a p-well.

Research paper thumbnail of SCSI Drives and RAID Arrays Functionality

What is SCSI A computer is full of busses-highways that take information and power from one place... more What is SCSI A computer is full of busses-highways that take information and power from one place to another. For example, when you plug an MP3 player or digital camera into your computer, you're probably using an universal serial bus (USB) port. Your USB port is good at carrying the data and electricity required for small electronic devices that do things like create and store pictures and music files. But that bus isn't big enough to support a whole computer, a server or lots of devices simultaneously. For that, you would need something more like SCSI. SCSI originally stood for Small Computer System Interface, but it's really outgrown the "small" designation. It's a fast bus that can connect lots of devices to a computer at the same time, including hard drives, scanners, CD-ROM/RW drives, printers and tape drives. Other technologies, like serial-ATA (SATA), have largely replaced it in new systems, but SCSI is still in use. This article will review SCSI basics and give you lots of information on SCSI types and specifications. SCSI Basics SCSI connector SCSI is based on an older, proprietary bus interface called Shugart Associates System Interface (SASI). SASI was originally developed in 1981 by Shugart Associates in conjunction with NCR Corporation. In 1986, the American National Standards Institute (ANSI) ratified SCSI (pronounced "scuzzy"), a modified version of SASI. SCSI uses a controller to send and receive data and power to SCSI-enabled devices, like hard drives and printers. SCSI has several benefits. It's fairly fast, up to 320 megabytes per second (MBps). It's been around for more than 20 years and it's been thoroughly tested, so it has a reputation for being reliable. Like Serial ATA and FireWire, it lets you put multiple items on one bus. SCSI also works with most computer systems. However, SCSI also has some potential problems. It has limited system BIOS support, and it has to be configured for each computer. There's also no common SCSI software interface. Finally, all the different SCSI types have different speeds, bus widths and connectors, which can be confusing. When you know the meaning behind "Fast," "Ultra" and "Wide," though, it's pretty easy to understand. We'll look at these SCSI types next. Single Ended Parallel SCSI icon

Research paper thumbnail of Programmable Logic Devices and Embedded Systems

A quiet revolution is taking place. Over the past decade, the density of the average programmable... more A quiet revolution is taking place. Over the past decade, the density of the average programmable logic device has begun to skyrocket. The maximum number of gates in an FPGA is currently around 20,000,000 and doubling every 18 months. Meanwhile, the price of these chips is dropping. What all of this means is that the price of an individual NAND or NOR is rapidly approaching zero! And the designers of embedded systems are taking note. Some system designers are buying processor cores and incorporating them into system-on-a-chip designs; others are eliminating the processor and software altogether, choosing an alternative hardware-only design. As this trend continues, it becomes more difficult to separate hardware from software. After all, both hardware and software designers are now describing logic in high-level terms, albeit in different languages, and downloading the compiled result to a piece of silicon. Surely, no one would claim that language choice alone marks a real distinction between the two fields. Turing's notion of machine-level equivalence and the existence of language-to-language translators have long ago taught us all that that kind of reasoning is foolish. There are even now products that allow designers to create their hardware designs in traditional programming languages like C. Therefore, language differences alone are not enough of a distinction. Both hardware and software designs are compiled from a human-readable form into a machine-readable one. And both designs are ultimately loaded into some piece of silicon. Does it matter that one chip is a memory device and the other a piece of programmable logic? If not, how else can we distinguish hardware from software? I am not convinced that an unambiguous distinction between hardware and software can ever be found, but I do not think that matters all that much. Regardless of where the line is drawn, there will continue to be engineers like you and me who cross the boundary in our work. So rather than try to nail down a precise boundary between hardware and software design, we must assume that there will be overlap in the two fields. And we must all learn about new things. Hardware designers must learn how to write better programs, and software developers must learn how to utilize programmable logic. Types of programmable logic Many types of programmable logic are available. The current range of offerings includes everything from small devices capable of implementing only a handful of logic equations to huge FPGAs that can hold an entire processor core (plus peripherals!). In addition to this incredible difference in size, there is also much variation in architecture. In this section, I will introduce you to the most common types of programmable logic and highlight the most important features of each type.

Research paper thumbnail of PCB Design Process and Fabrication Challenges

Virtually every electronic product is constructed with one or more printed-circuit boards (PCBs). The PCBs hold the ICs and other components and implement the interconnections between them. PCBs are created in abundance for portable electronics, computers, and entertainment equipment. They are also made for test equipment, manufacturing, and spacecraft. Eventually, almost every EE must design a PCB, which is not something that is taught in school. Yet engineers, technicians, and even novice PCB designers can create high-quality PCBs for any purpose with confidence that the outcome will meet or exceed the objective. These designs can also be completed on schedule and within budget while meeting the design requirements. Designers just need to mind the essential documentation, design steps and strategies, and final checks.
The Basic Design Process
The ideal PCB design starts with the discovery that a PCB is needed and continues through the final production boards (Fig. 1). After determining why the PCB is needed, the product's final concept should be decided. The concept includes the design's features, the functions the PCB must have and perform, interconnection with other circuits, placement, and the approximate final dimensions.
Fig. 1. The ideal PCB design flow begins when designers recognize a need that must be fulfilled, and it does not end until testing verifies that the design can meet those needs.

Research paper thumbnail of High-Brightness LED Application Theory and Challenges

LEDs are the most efficient way to turn an electric current into illumination. When a current flows through a diode in the forward direction, it consists of surplus electrons moving in one direction in the lattice and "holes" (voids in the lattice) moving in the other. Occasionally, electrons can recombine with holes. When they do, the process releases energy in the form of photons.
This is true of all semiconductor junctions, but LEDs use materials that maximize the effect. The color of the light emitted (corresponding to the energy of the photon) is determined by the semiconductor materials that form the diode junction.
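As a quick back-of-the-envelope check (standard physics, not specific to any particular device), the photon energy fixes the wavelength:

E_{\text{photon}} = \frac{hc}{\lambda} \approx \frac{1240\ \text{eV}\cdot\text{nm}}{\lambda}

so a 450 nm "blue" photon carries roughly 2.8 eV, which is why blue and ultraviolet emitters require wide-bandgap materials such as GaN.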
The latest high-brightness (HB) white LEDs are made possible by semiconductor materials that produce blue or ultraviolet photons. In addition to the diode, an HB package contains "yellow" phosphors on the inside of its lens. Some "blue" photons escape, but others excite the phosphors, which then give off "yellow" photons. The result can be tuned in manufacturing to produce "white" light.
Lots of engineering relates to controlling the quality of this light, with several ways to interconnect multiple LEDs to increase and manage light output. The general approach is to drive series strings with a constant current, but there are subtleties to interfacing the drivers with AC supplies and control schemes.
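To make the constant-current idea concrete, here is a minimal sketch of sizing the simplest possible driver, a ballast resistor, for a short series string. All the numbers (a 12 V rail, three LEDs at about 3.1 V forward drop, 350 mA) are assumptions for illustration, not values from the article:

#include <stdio.h>

int main(void)
{
    const double v_supply = 12.0;    /* supply rail, volts (assumed) */
    const double v_f      = 3.1;     /* forward drop per LED, volts (assumed) */
    const int    n_leds   = 3;
    const double i_drive  = 0.350;   /* target string current, amps (assumed) */

    double headroom  = v_supply - n_leds * v_f;        /* voltage left across the resistor */
    double r_ballast = headroom / i_drive;             /* Ohm's law */
    double p_wasted  = i_drive * i_drive * r_ballast;  /* dissipated as heat, not light */

    printf("R = %.1f ohm, wasting %.2f W\n", r_ballast, p_wasted);
    return 0;
}

The roughly 0.9 W burned in the resistor is exactly why practical HB-LED drivers use switching constant-current regulators instead.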
Light and lighting represent basic and crucial elements in the life of humankind. The pursuit of new lighting sources has been a trend of our civilization. This pursuit is generally driven by technological advancements, needs, challenges, and, sometimes, by luxury. Now that we are waking up to realize the consequences of abusing our world's limited resources, the push towards energy conservation has become a mandate, not a choice. Therefore, our world's current challenge is how to balance the needs of our modern, possibly spoiled, lifestyle against the necessity to 'go green'. When it comes to lighting, it is quite easy to imagine the impact of globally improving the efficiency of lighting sources by 10%. But what if it could be improved by 1000%? The use of newly enhanced Light Emitting Diodes (LEDs) as lighting sources has the potential to achieve these efficiency improvements while maintaining outstanding performance and reliability that surpass many of the currently used sources. Part One of this two-part series sheds some light on the basics of LED physical structure, colors, efficiency, applications, and drivers.

Research paper thumbnail of Hard Disk Drive and Disk Encryption

A hard disk drive (HDD), hard disk, hard drive or fixed disk is a data storage device used for storing and retrieving digital information using one or more rigid ("hard") rapidly rotating disks (platters) coated with magnetic material. The platters are paired with magnetic heads arranged on a moving actuator arm, which read and write data to the platter surfaces.[2] Data is accessed in a random-access manner, meaning that individual blocks of data can be stored or retrieved in any order and not only sequentially. HDDs are a type of non-volatile memory, retaining stored data even when powered off.
Introduced by IBM in 1956,[3] HDDs became the dominant secondary storage device for general-purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDD units, though most current units are manufactured by Seagate, Toshiba and Western Digital. As of 2015, HDD production (exabytes per year) and areal density are growing, although unit shipments are declining.
The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a capacity of 1,000 gigabytes (GB; where 1 gigabyte = 1 billion bytes). Typically, some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and possibly inbuilt redundancy for error correction and recovery. Performance is specified by the time required to move the heads to a track or cylinder (average access time) plus the time it takes for the desired sector to move under the head (average latency, which is a function of the physical rotational speed in revolutions per minute), and finally the speed at which the data is transmitted (data rate).
The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as PATA (Parallel ATA), SATA (Serial ATA), USB or SAS (Serial Attached SCSI) cables.
As of 2016, the primary competing technology for secondary storage is flash memory in the form of solid-state drives (SSDs), which have higher data transfer rates, better reliability,[4] and significantly lower latency and access times, but HDDs remain the dominant medium for secondary storage due to advantages in price per bit.[5][6] However, SSDs are replacing HDDs where speed, power consumption and durability are more important considerations.[7][8] Hybrid drive products have been available since 2007.[9] These are a combination of HDD and SSD technology in a single device, also known by the initialism SSHD.
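As a worked example of the latency term (the 9 ms seek time is an assumed typical figure, not from the text): for a 7200 RPM drive,

t_{\text{latency}} = \frac{1}{2}\cdot\frac{60\ \text{s}}{7200} \approx 4.17\ \text{ms}, \qquad t_{\text{access}} \approx t_{\text{seek}} + t_{\text{latency}} \approx 9 + 4.17 \approx 13\ \text{ms}.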

Research paper thumbnail of Dynamic Memory Allocation and Fragmentation

In C and C++, it can be very convenient to allocate and de-allocate blocks of memory as and when needed. This is certainly standard practice in both languages and almost unavoidable in C++. However, the handling of such dynamic memory can be problematic and inefficient. For desktop applications, where memory is freely available, these difficulties can be ignored. For real-time embedded systems, ignoring the issues is not an option.
Dynamic memory allocation tends to be non-deterministic; the time taken to allocate memory may not be predictable, and the memory pool may become fragmented, resulting in unexpected allocation failures. This paper outlines these problems in detail, describes the facilities in the Nucleus RTOS for handling dynamic memory, and details an approach to deterministic dynamic memory allocation.
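The classic way to make allocation deterministic and fragmentation-free is to allocate from pools of fixed-size blocks, which is the general idea behind RTOS partition-pool facilities. The following is a minimal generic sketch (names and sizes are illustrative; this is not the Nucleus API):

#include <stdalign.h>
#include <stddef.h>

#define BLOCK_SIZE  64   /* must be a multiple of sizeof(void *) */
#define NUM_BLOCKS  32

static alignas(void *) unsigned char pool[NUM_BLOCKS][BLOCK_SIZE];
static void *free_list;                 /* list threaded through the free blocks */

void pool_init(void)
{
    free_list = NULL;
    for (int i = 0; i < NUM_BLOCKS; i++) {
        *(void **)pool[i] = free_list;  /* store the next pointer inside the block itself */
        free_list = pool[i];
    }
}

void *pool_alloc(void)                  /* O(1): pop the list head */
{
    void *blk = free_list;
    if (blk)
        free_list = *(void **)blk;
    return blk;                         /* NULL when the pool is exhausted */
}

void pool_free(void *blk)               /* O(1): push back onto the head */
{
    *(void **)blk = free_list;
    free_list = blk;
}

Because both operations are a single pointer swap, the timing is constant and the pool can never fragment; the trade-off is internal waste when requests are smaller than BLOCK_SIZE.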

Research paper thumbnail of Disruptive Technologies and the Society

The pre-technological period, in which all other animal species remain today, was the non-rational period of early prehistoric man. The emergence of technology, made possible by the development of the rational faculty, paved the way for the first stage: the tool. A tool provides a mechanical advantage in accomplishing a physical task; an arrow, plow, or hammer augments physical labor to achieve the objective more efficiently. Later, animal-powered tools such as the plow and the horse increased the productivity of food production about tenfold over the technology of the hunter-gatherers. Tools allow one to do things impossible to accomplish with one's body alone, such as seeing minute visual detail with a microscope, manipulating heavy objects with a pulley and cart, or carrying volumes of water in a bucket.
The second technological stage was the creation of the machine. A machine is a tool that substitutes for human physical effort and requires the operator only to control its function. Machines became widespread with the industrial revolution. Examples include cars, trains, computers and lights. Machines allow humans to exceed the limitations of their bodies tremendously. Putting a machine on the farm, the tractor, increased food productivity at least tenfold over the technology of the plow and the horse.
The third and final stage of technological evolution is automation. An automation is a machine that removes the element of human control with an automatic algorithm. Examples of machines that exhibit this characteristic are digital watches, automatic telephone switches, pacemakers and computer programs.

Research paper thumbnail of Design of an Open-Source SATA Core

Serial ATA is a peripheral interface created in 2003 to replace Parallel ATA, also known as IDE. Hard drive speeds were getting faster and would soon outpace the capabilities of the older standard: the fastest PATA speed achieved was 133 MB/s, while SATA began at 150 MB/s and was designed with future performance in mind [2]. Also, newer silicon technologies used lower voltages than PATA's 5 V minimum. The ribbon cables used for PATA were also a problem; they were wide and blocked air flow, had a short maximum length restriction, and required many pins and signal lines [2].
SATA has a number of features that make it superior to Parallel ATA. The signaling voltages are low and the cables and connectors are very small. SATA has outpaced hard drive performance, so the interface is not a bottleneck in a system. It also has a number of new features, including hot-plug support.
SATA is a point-to-point architecture, where each SATA link contains only two devices: a SATA host (typically a computer) and the storage device. If a system requires multiple storage devices, each SATA link is maintained separately. This simplifies the protocol and allows each storage device to utilize the full capabilities of the bus simultaneously, unlike in the PATA architecture where the bus is shared.
To ease the transition to the new standard, SATA maintains backward compatibility with PATA. To do this, the Host Bus Adapter (HBA) maintains a set of shadow registers that mimic the registers used by PATA. The disk also maintains a set of these registers. When a register value is changed, the register set is sent across the serial line to keep both sets of registers synchronized. This allows the software drivers to be agnostic about the interface being used.
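For reference, the register set being shadowed is the classic ATA "task file". A sketch of its layout in C (field names are descriptive, not taken from any particular core's source):

#include <stdint.h>

/* Conceptual sketch of the ATA task-file registers that the SATA
 * shadow registers mirror; layout follows the classic PATA command
 * block. */
typedef struct {
    uint16_t data;           /* data port                        */
    uint8_t  error_feature;  /* error (read) / features (write)  */
    uint8_t  sector_count;
    uint8_t  lba_low;        /* historically: sector number      */
    uint8_t  lba_mid;        /* historically: cylinder low       */
    uint8_t  lba_high;       /* historically: cylinder high      */
    uint8_t  device;         /* device/head select               */
    uint8_t  status_cmd;     /* status (read) / command (write)  */
} ata_shadow_regs_t;

In SATA, a write to the command register is what prompts the HBA to bundle the register values into a Register FIS (Frame Information Structure) and send it across the serial link, which is how the two copies stay synchronized.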

Research paper thumbnail of Computer Security and Mobile Security Challenges

Computer security, also known as cybersecurity or IT security, is the protection of information systems from theft of or damage to the hardware, the software, and the information on them, as well as from disruption or misdirection of the services they provide.[1] It includes controlling physical access to the hardware, as well as protecting against harm that may come via network access, data and code injection,[2] and malpractice by operators, whether intentional, accidental, or due to operators being tricked into deviating from secure procedures.[3]
The field is of growing importance due to the increasing reliance on computer systems in most societies.[4] Computer systems now include a very wide variety of "smart" devices, including smartphones, televisions and tiny devices as part of the Internet of Things – and networks include not only the Internet and private data networks, but also Bluetooth, Wi-Fi and other wireless networks.

Research paper thumbnail of Computer Busses, Ports and Peripheral Devices

In computer architecture, a bus (related to the Latin "omnibus", meaning "for all") is a communication system that transfers data between components inside a computer, or between computers. The expression covers all related hardware components (wire, optical fiber, etc.) and software, including communication protocols. Early computer buses were parallel electrical wires with multiple connections, but the term is now used for any physical arrangement that provides the same logical functionality as a parallel electrical bus. Modern computer buses can use both parallel and bit-serial connections, and can be wired in either a multidrop (electrical parallel) or daisy-chain topology, or connected by switched hubs, as in the case of USB.

Research paper thumbnail of Channel Multiplexing, Bandwidth, Data Rate and Capacity

This article gives a brief overview of channel multiplexing techniques like FDM, TDM etc. and how they are used in computer communication. Channel multiplexing is the process of splitting or sharing the capacity of a high-speed channel/telecommunication link to form multiple low-capacity/low-speed sub-channels. Each such sub-channel can then be used by multiple end nodes as dedicated links. Multiplexing can usually be done in different domains like time, frequency and space (and even combinations of these).
Channel Multiplexing
For computer communication, though multiplexing techniques like TDM and FDM were initially used mainly in backbone links connecting multiple data exchanges, they have since percolated widely into the access/last-mile links too, including inside home networks.
Time Division Multiplexing (TDM)
In TDM, a high-speed data channel/link is made to carry data of multiple connections/end nodes in different time slots, in a round-robin fashion. TDM is similar in concept to multitasking computers, where the main processor carries out multiple tasks simultaneously. In multitasking processors, though the processor executes only one task at any instant of time and keeps shuttling between multiple tasks in some order, because of the high speed at which it executes, each task behaves as though the processor were dedicated only to it. Similarly, in TDM, data of each connection is segmented into smaller units, so that they fit inside mini time slots. The link transmits these small units of data from multiple connections in a round-robin fashion, periodically allotting a mini time slot for each user, in the time domain.
In TDM, the basic repeating unit is a frame. A TDM frame consists of a fixed number of time slots. Each time slot inside a frame carries data belonging to a specific end node/connection. Thus multiple logical sub-channels/links are created inside a single channel. It is also possible to give multiple slots within a frame to the same user, thereby providing different-capacity sub-channels within the same link. Assuming that there are n end users, each requiring a link with a capacity of X kbps, then to successfully multiplex these end users on a channel, the channel's capacity needs to be at least equal to n times X kbps. The figure given below illustrates a sample TDM scheme with 4 users being served in a round-robin fashion in the time domain.
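A toy sketch of the capacity arithmetic and the round-robin slot mapping (the numbers are illustrative assumptions):

#include <stdio.h>

int main(void)
{
    const int n_users = 4;
    const int x_kbps  = 64;     /* capacity each user needs (assumed) */

    /* the shared link must be at least n times X kbps */
    printf("required link capacity >= %d kbps\n", n_users * x_kbps);

    /* which user owns time slot t? plain round robin */
    for (int t = 0; t < 8; t++)
        printf("slot %d -> user %d\n", t, t % n_users);
    return 0;
}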

Research paper thumbnail of Booting an Intel System Architecture

When external power is first applied to a platform, the hardware platform must carry out a number of tasks before the processor can be brought out of reset. The first task is for the power supply to be allowed to settle down to its nominal state; once the primary power supply settles, there are usually a number of derived voltage levels needed on the platform. For example, on the Intel architecture reference platform the input supply consists of a 12-volt source, but the platform and processor require a number of different voltage rails such as 1.5 V, 3.3 V, 5 V, and 12 V. The platform and processor also require that the voltages are provided in a particular sequence. This process is known as power sequencing. The power is sequenced by controlling analog switches (typically field-effect transistors). The sequence is often driven by a complex programmable logic device (CPLD). The platform clocks are also derived from a small number of input clock and oscillator sources. The devices use phase-locked-loop circuitry to generate the derived clocks used on the platform. These clocks also take time to converge. When all these steps have occurred, the power-sequencing CPLD de-asserts the reset line to the processor. Figure 1 shows an overview of the platform blocks described. Depending on the integration of silicon features, some of this logic may be on-chip and controlled by microcontroller firmware, which starts prior to the main processor.
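A sketch of the sequencing logic such a CPLD or management microcontroller implements; enable_rail(), rail_good(), delay_ms() and release_reset() are hypothetical board-support hooks, and the rail order and settle times are assumptions, not taken from any Intel reference design:

#include <stdbool.h>

extern void enable_rail(int rail);      /* drive the FET switch for one rail (hypothetical) */
extern bool rail_good(int rail);        /* comparator: rail within tolerance (hypothetical) */
extern void delay_ms(int ms);
extern void release_reset(void);        /* de-assert the processor reset line (hypothetical) */

void power_up_sequence(void)
{
    /* assumed order: 1.5 V, then 3.3 V, then 5 V, then 12 V */
    static const int rail_order[] = { 0, 1, 2, 3 };

    for (int i = 0; i < 4; i++) {
        enable_rail(rail_order[i]);
        while (!rail_good(rail_order[i]))
            ;                           /* wait for this rail to settle */
        delay_ms(10);                   /* assumed inter-rail settle time */
    }
    delay_ms(100);                      /* allow the PLL-derived clocks to lock (assumed) */
    release_reset();                    /* finally bring the processor out of reset */
}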

Research paper thumbnail of Optimal Centralized Dynamic-Time-Division-Duplex

IEEE Transactions on Wireless Communications, 2021

Research paper thumbnail of Robust Resource Allocation for MIMO Wireless Powered Communication Networks Based on a Non-linear EH Model

IEEE Transactions on Communications, 2017

Research paper thumbnail of DC Power Supplies, Applications and Measurements

Power management plays a major role in virtually every electronic system because it controls, regulates, and distributes DC power throughout the system. Therefore, the DC power management subsystem can affect the reliability, performance, cost, and time-to-market of the associated electronic equipment.
Power management subsystems enable an electronic system to function properly by supplying and controlling its DC power. An analogy is that a power management subsystem functions in a manner similar to the body’s blood vessels that supply the proper nutrients to keep the body alive. Likewise, the power management subsystem supplies and controls the power that keeps an electronic system alive.

Research paper thumbnail of Semiconductor Equipment Safety Standards

Technical operations such as those performed in semiconductor and photovoltaic device fabrication have inherent risks. For example, the equipment employed in these industries may operate at high temperatures, under vacuum conditions, and can employ high electrical voltages/currents and/or extremely hazardous, often pyrophoric and frequently corrosive chemicals. The failure of a critical component in a system with any of these characteristics can produce unsafe conditions that can lead to severe injury or death of the operators, not to mention catastrophic damage to costly equipment. Other industrial settings have similar or greater levels of risk. Operational safety is thus of paramount importance in industrial semiconductor and other chemical processing systems. Because of this, a great deal of effort has been expended over the past decades to establish reliable metrics for the prediction of safe operational conditions for process equipment and to implement designs that meet the acceptable risk level without compromising cost and programmability.

Research paper thumbnail of AC Power Distribution Systems and Standards

The best distribution system is one that will, cost-effectively and safely, supply adequate electric service to both present and future probable loads. The function of the electric power distribution system in a building or an installation site is to receive power at one or more supply points and to deliver it to the individual lamps, motors and all other electrically operated devices. The importance of the distribution system to the function of a building makes it almost imperative that the best system be designed and installed.
In order to design the best distribution system, the system design engineer must have information concerning the loads and a knowledge of the various types of distribution systems that are applicable. The various categories of buildings have many specific design challenges, but certain basic principles are common to all. Such principles, if followed, will provide a soundly executed design.

Research paper thumbnail of Lasers and Laser Applications

A laser is a device that emits light through a process of optical amplification based on the stimulated emission of electromagnetic radiation. The term "laser" originated as an acronym for "light amplification by stimulated emission of radiation". The first laser was built in 1960 by Theodore H. Maiman at Hughes Research Laboratories, based on theoretical work by Charles Hard Townes and Arthur Leonard Schawlow. A laser differs from other sources of light in that it emits light coherently. Spatial coherence allows a laser to be focused to a tight spot, enabling applications such as laser cutting and lithography. Spatial coherence also allows a laser beam to stay narrow over great distances (collimation), enabling applications such as laser pointers. Lasers can also have high temporal coherence, which allows them to emit light with a very narrow spectrum, i.e., they can emit a single color of light. Temporal coherence can be used to produce pulses of light as short as a femtosecond. Among their many applications, lasers are used in optical disk drives, laser printers, and barcode scanners; DNA sequencing instruments, fiber-optic and free-space optical communication; laser surgery and skin treatments; cutting and welding materials; military and law enforcement devices for marking targets and measuring range and speed; and laser lighting displays in entertainment.

Research paper thumbnail of Multirotor Aircraft Dynamics, Simulation and Control

A quadrotor helicopter (quadcopter) is a helicopter which has four equally spaced rotors, usually arranged at the corners of a square body. With four independent rotors, the need for a swashplate mechanism is alleviated. The swashplate mechanism was needed to allow the helicopter to utilize more degrees of freedom, but the same level of control can be obtained by adding two more rotors. The development of quadcopters had stalled until very recently, because controlling four independent rotors proved incredibly difficult and practically impossible without electronic assistance.
The decreasing cost of modern microprocessors has made electronic and even completely autonomous control of quadcopters feasible for commercial, military, and even hobbyist purposes.
Quadcopter control is a fundamentally difficult and interesting problem. With six degrees of freedom (three translational and three rotational) and only four independent inputs (rotor speeds), quadcopters are severely underactuated. In order to achieve six degrees of freedom, rotational and translational motion are coupled. The resulting dynamics are highly nonlinear, especially after accounting for the complicated aerodynamic effects. Finally, unlike ground vehicles, helicopters have very little friction to prevent their motion, so they must provide their own damping in order to stop moving and remain stable. Together, these factors create a very interesting control problem. We will present a very simplified model of quadcopter dynamics and design controllers that make these dynamics follow a designated trajectory.
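For one common simplified rotor model (assuming a plus-configuration frame with arm length L, rotor lift constant k, and rotor drag constant b; signs depend on the rotor-numbering convention), the four rotor speeds map to the total thrust and the three body torques as:

T = k(\omega_1^2 + \omega_2^2 + \omega_3^2 + \omega_4^2), \qquad
\tau_\phi = Lk(\omega_4^2 - \omega_2^2), \qquad
\tau_\theta = Lk(\omega_3^2 - \omega_1^2), \qquad
\tau_\psi = b(\omega_1^2 - \omega_2^2 + \omega_3^2 - \omega_4^2)

Four inputs setting one force and three torques is exactly the underactuation described above: translation in x and y can only be obtained indirectly, by tilting the thrust vector.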

Research paper thumbnail of Semiconductor Equipment Companies Organizational and Management Analysis

One of the most important aspects of any organization is to match the skill sets of the employees and their assignments to meet the corporate goals and objectives. Personally, I prefer to build an organization based on people's skills and strengths. Some organizations have taken seriously the issue of matching people to jobs, primarily due to the high cost of replacing employees. Companies have adopted behavioral interviewing techniques to make sure they are getting the right people. They have seen a noticeable change in attitudes, work product and morale among staff since moving in this direction. Talent management is one of the biggest issues facing organizations today. Job descriptions are typically stale and dated and don't reflect the reality of what companies need today in the way of top talent. As a result, they recruit or allow people to stay in jobs with skills based on what the job required years ago. This reality does not push companies to find or train the type of employee they need to survive in today's competitive world. Business should be more focused on determining each person's skill identity and then making sure that the strengths are matched with the appropriate tasks.
1.1 Do we have the right skill sets?
Engineers are always taking additional training to develop broad knowledge and skill sets. We have constantly changing requirements based on the dynamics of semiconductor equipment technology. Within the organization we hire new people with new skills and train the existing ones to reduce the risk of irreplaceable skills. I believe all companies have great engineering potential with the right skill sets and continue to hire new talent from outside.
1.2 Do we have enough people?
Departments, divisions and business units have grown quite considerably over the last few years, and they still are growing. This does not include contractors, temporary employees and consultants added to the teams over the same time frame. We will always be perceived as understaffed and overloaded. We have enough qualified people to handle a dynamically changing project load. It is a matter of using the resources more effectively. Priority will play a bigger role when project volume exceeds the headcount trained to handle it.

Research paper thumbnail of New System Architecture

Research paper thumbnail of MW Power Delivery Systems

Research paper thumbnail of RF Power Delivery System

Research paper thumbnail of Outsourcing vs. Insourcing Strategy and Decision Making Process


Research paper thumbnail of Classical Management Structure and Todays Management Methods

Research paper thumbnail of Methods and Devices for CMP Retaining Ring Vibration Detection

Mechanical systems often produce a considerable amount of vibration and noise. To obtain a complete picture of the dynamic behavior of these systems, vibration and sound measurements are of significant importance. Optical metrology is well suited for non-intrusive measurements on complex objects. The development and use of remote, non-contact vibration measurement methods for spindles are described, and vibration measurements on thin-walled structures and sound-field measurements are made. The idea is to apply the measurement method in rotating machines, where near-field acoustic measurements may provide additional information about a rotating machine part. The measurement methods that are developed and used provide increased understanding of the dynamics of complex structures such as thin-walled or rotating spindles. This may be utilized in the optimization of the machines currently available and in the development of machine parts.

Research paper thumbnail of CLTC-PID and k-factor adjustments

Research paper thumbnail of Close Loop Control Fundamentals

Closed-loop control is in the fabric of modern-day automation and embedded system control.
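A minimal discrete PID update in C illustrates the idea; this is a generic sketch, not code from the paper, and the anti-windup and filtering a production loop needs are omitted:

/* one PID step: returns the actuator command for the current sample */
typedef struct {
    double kp, ki, kd;      /* tuning gains */
    double integral;        /* accumulated I term */
    double prev_error;      /* previous error, for the finite-difference D term */
} pid_ctrl_t;

double pid_update(pid_ctrl_t *c, double setpoint, double measured, double dt)
{
    double error = setpoint - measured;
    c->integral += error * dt;                    /* accumulate the I term */
    double deriv = (error - c->prev_error) / dt;  /* finite-difference D term */
    c->prev_error = error;
    return c->kp * error + c->ki * c->integral + c->kd * deriv;
}

In practice the integral is clamped (anti-windup) and the derivative is low-pass filtered before use.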

Research paper thumbnail of Time Management and Personal Organization

The management of one's time, and the concurrent management of others' time, has played a major role in the ability to undertake simultaneously a wide variety of vocational, avocational, and volunteer opportunities. It draws on experience in both the public and the private sector, where the need to interact with colleagues and "outsiders" within both personal and professional time constraints is vital for both career success and the maintenance of a satisfying personal and family life.

Research paper thumbnail of The Evolution of Making Semiconductors

Overview
Semiconductor manufacturing is a multibillion-dollar business with hundreds of suppliers, large and small, playing their part. From raw silicon in one end to finished product out the other, every chip passes through a dozen different corporate hands. Nothing is small scale, and even the smallest niches in the supply chain are multimillion-dollar markets employing thousands of people. Some segments of the industry are labor intensive and have been gradually moving from country to country as taxes, wage rates, and educational levels change and shift. Others are capital intensive and tend to stay centered in industrialized countries. Copyrights, patents, and intellectual property laws also affect where each part of the business is best carried out.
Semiconductor Food Chain
Like any big industry, the business of designing, making, and selling semiconductors has several steps and a lot of middlemen. The big all-under-one-roof manufacturers like Intel are fairly rare. Most of the world's big chip makers use outside contractors for some of their business, whereas some small (and not-so-small) chip companies don't manufacture anything at all. Below is a simplified illustration of how the entire semiconductor "food chain" works for a number of representative companies. The diagram flows from left to right and includes some of the interrelated players. It doesn't include the chip-design phase; at this point, we're assuming that the chip design is finished and ready to go. Chips can be manufactured in-house, by a fab partner, or by an independent foundry. In any case, the owner of the fab has to buy equipment and chemicals on the open market, and the final product will be sold through independent sales channels.

Research paper thumbnail of Programmable Motion Control Fundamentals

In the early days of machine development, the control of position and velocity was accomplished by elaborate, expensive, and time-consuming solutions such as a series of cams, gears, shuttles, and the like. Frequently, other devices such as hydraulic and pneumatic cylinders, electric solenoids, plungers, and grippers were added to these systems. Some examples of these solutions include early textile machinery, coil making, and wire winding equipment. The automotive and machine tool industries were among those who saw the control of motion as a means of providing complex shapes and integrating complex operations. Being able to move heavy materials and process them in a repeatable and continuous manner added value and increased the productivity of their operations. While this was of great benefit in operations which were continually repeatable and involved no changes, it was not an optimum solution for operations which required short runs of parts with any degree of variety or customization. This was, of course, because early automated systems were highly dedicated and required laborious retooling and setup when even marginally different products or processes were required.
With the emergence of computers and microprocessor technology, other options became possible. In electronically based systems one may choose a variety of different parameters by merely changing the software within the system. This translates into less setup work and more throughput. For example, to change the speed of an operation, a mechanical system might require you to exchange an existing gear with a larger or smaller one. In the modern world of programmable motion control, this could be accomplished by entering a few lines of code or selecting a different velocity profile from the system's memory. This is what we refer to as programmable motion control.
PMC Defined
Programmable Motion Control (PMC) is defined as the application of programmable hardware and software (in conjunction with input sensory devices, actuators, and other feedback devices) for the control of one or more linear or rotary motions. Expanding on this definition in today's concepts for the equipment used to control motion, a programmable motion controller commonly takes the form of a microprocessor-based system. The system will be comprised of the following basic elements: controller, amplifier, actuator, feedback. A simplified block diagram of a programmable motion control system appears below. The controller will include a means of entering a set of instructions or code into its memory which are then translated into a series of electrical pulses or analog signals and output to an amplifier for controlling some type of actuator. The amplifier receives the signals from the controller and boosts or amplifies them to appropriate levels for the actuator.
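As an illustration of "selecting a velocity profile," here is a sketch of the symmetric trapezoidal profile most motion controllers offer; the move time, cruise speed, and acceleration are assumed values for illustration:

#include <stdio.h>

/* velocity at time t for a trapezoidal move; assumes the move is long
 * enough to actually reach v_max (t_total >= 2 * v_max / accel) */
double trapezoid_velocity(double t, double t_total, double v_max, double accel)
{
    double t_ramp = v_max / accel;                 /* time to reach cruise speed */
    if (t < t_ramp)           return accel * t;                /* ramp up   */
    if (t > t_total - t_ramp) return accel * (t_total - t);    /* ramp down */
    return v_max;                                              /* cruise    */
}

int main(void)
{
    /* a 2 s move at 100 mm/s cruise and 500 mm/s^2 acceleration (assumed) */
    for (double t = 0.0; t <= 2.0; t += 0.25)
        printf("t=%.2f s  v=%.1f mm/s\n", t, trapezoid_velocity(t, 2.0, 100.0, 500.0));
    return 0;
}

Changing the move is now a matter of passing different parameters, which is exactly the setup-time advantage over swapping gears that the text describes.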

Research paper thumbnail of Digital Signal Processing using ADC and DAC

Most of the signals directly encountered in science and engineering are continuous: light intensity that changes with distance; voltage that varies over time; a chemical reaction rate that depends on temperature, etc. Analog-to-Digital Conversion (ADC) and Digital-to-Analog Conversion (DAC) are the processes that allow digital computers to interact with these everyday signals. Digital information is different from its continuous counterpart in two important respects: it is sampled, and it is quantized. Both of these restrict how much information a digital signal can contain. This article is about information management: understanding what information you need to retain, and what information you can afford to lose. In turn, this dictates the selection of the sampling frequency, number of bits, and type of analog filtering needed for converting between the analog and digital realms.
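Two standard results make these trade-offs concrete (general signal-processing facts, not specific to this article). The sampling theorem requires

f_s > 2 f_{\text{max}}

to capture a signal whose highest frequency component is f_{max}, and quantizing with N bits limits the signal-to-noise ratio of a full-scale sine wave to roughly

\text{SNR} \approx 6.02\,N + 1.76\ \text{dB}

so, for example, a 12-bit converter tops out near 74 dB. Choosing f_s, N, and the analog anti-alias filter is precisely the information-management decision described above.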

Research paper thumbnail of All my books with the published articles on this site