Refresh Interval

MCSE 70-293: Planning, Implementing, and Maintaining a Name Resolution Strategy

Martin Grasdal, ... Dr. Thomas W. Shinder Technical Editor, in MCSE (Exam 70-293) Study Guide, 2003

Aging and Scavenging of DNS Records

When you enable zones for dynamic updates, it is possible that the zone data files will accumulate a large number of superfluous and outdated records that might have a negative effect on DNS performance. For example, if you retire a user's workstation and disconnect it from the network, the RRs for that computer might remain in the DNS data. To help ensure the integrity and currency of DNS information, you can enable aging and scavenging of outdated DNS records. (By default, the aging and scavenging option is not enabled.)

Aging and scavenging can be set on a per-zone or per-DNS server basis. Per-zone settings override per-DNS server settings. Figure 6.14 shows the server-wide aging and scavenging property page.

Figure 6.14. Aging and Scavenging Settings for a DNS Server

The No-refresh interval setting is the amount of time that must elapse before a DNS client or DHCP server can refresh a timestamp for a record. When a DNS client creates a record, it is assigned a timestamp. The DNS client attempts to refresh this record every 24 hours. Unless the record is changed (for example, the client receives a new IP address), the timestamp cannot be refreshed for a default period of seven days. After the seven days have elapsed, the DNS client can refresh the timestamp, which starts the timer on the no-refresh interval for the record. If the record is not refreshed in the seven-day interval period, it can be scavenged. When the record is scavenged, however, depends on another setting, the Scavenging period. This setting is enabled and configured on the Advanced tab of the property pages for the DNS server. To enable scavenging, you must enable this setting, as well as the settings for No-refresh interval and Refresh interval.
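To make the interaction of these settings concrete, the following is a minimal sketch (not part of the original text) that models when a dynamically registered record becomes eligible for scavenging, assuming the default seven-day no-refresh and refresh intervals:

```python
from datetime import datetime, timedelta

# Assumed defaults from the text: 7-day no-refresh and 7-day refresh intervals.
NO_REFRESH = timedelta(days=7)
REFRESH = timedelta(days=7)

def scavenge_eligible_after(timestamp: datetime) -> datetime:
    """A record whose timestamp is never refreshed again becomes eligible
    for scavenging once both intervals have elapsed."""
    return timestamp + NO_REFRESH + REFRESH

created = datetime(2003, 1, 1)
print(scavenge_eligible_after(created))  # 2003-01-15: eligible; actual deletion
                                         # still waits for the next scavenging pass
```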

EXAM WARNING

DDNS and its interaction with DHCP are important concepts. You should be thoroughly familiar with the implementation of DDNS and DHCP to support dynamic updates to DNS zones. Your understanding of these concepts should also be informed by a thorough understanding of the security implications of enabling DDNS.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781931836937500105

Configuring DNS

Tony Piltzecker, Brien Posey, in The Best Damn Windows Server 2008 Book Period (Second Edition), 2008

Configuring DDNS Aging and Scavenging

Administrators are responsible for defining when a record should be considered invalid and ready for deletion. Aging settings are used to determine when a record should be removed, and the scavenging process actually deletes it. There are two levels at which aging can be set: the server and the zone. Settings applied at the server level will apply to all AD integrated zones on the DNS server. Settings applied at the zone level override server level settings for AD integrated zones. You do not have to configure zone level settings for AD integrated zones. If you are using standard primary zones, however, you do have to configure aging at the zone level.

When a host or DHCP service registers a record dynamically with DNS, the record receives a timestamp. This timestamp is the foundation for the aging and scavenging process. Once established, records are updated using one of two methods. The first, a record refresh, is performed when a host checks in and lets the DNS server know that nothing has changed and the record is still valid. Most Windows 2000 and later hosts send a refresh every 24 hours. Because the timestamp is updated when a refresh occurs, AD replication (for AD integrated zones) or zone transfers (for standard zones) are triggered. To limit the amount of traffic consumed by DDNS, Microsoft allows administrators to configure a no-refresh interval (7 days by default). During this time the DNS server will decline refresh requests for the record.
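As a rough illustration of that rule (a sketch, not documented DNS server code), a server deciding whether to honor a refresh for an unchanged record only needs the record's timestamp and the no-refresh interval:

```python
from datetime import datetime, timedelta

NO_REFRESH_INTERVAL = timedelta(days=7)  # default from the text

def accept_refresh(record_timestamp: datetime, now: datetime) -> bool:
    # A refresh (no data change) is declined while the record is still inside
    # the no-refresh window; accepting it would only generate replication or
    # zone-transfer traffic for a record that has not changed.
    return now - record_timestamp >= NO_REFRESH_INTERVAL
```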

The second type of communication that hosts and DHCP servers use to dynamically modify DNS records is the record update method. This method is used when a new host joins the network and A (or AAAA) and PTR records are created for it, when a server is promoted to become a domain controller, or when an existing record requires an IP address update. DNS record changes involving the record update method can occur at any time and are not subject to the limits imposed by the no-refresh interval. To configure the refresh intervals at either the server or zone level, follow these steps:

1

Open DNS Manager by clicking Start | Administrative Tools | DNS

2

Select one of the following:

To manage server level aging and scavenging, in the left pane right-click the server node and select Set Aging/Scavenging for All Zones…

To manage zone level aging and scavenging, in the left pane expand the server, expand either the Forward Lookup Zones or Reverse Lookup Zones node, right-click the zone you want to configure, and click Properties. On the General tab, click the Aging button.

3

In the Aging/Scavenging Properties dialog that appears, select the Scavenge stale resource records box. See Figure 5.46.

Figure 5.46. The Server Aging/Scavenging Properties Dialog

4

Configure the following options:

No-refresh interval. This setting controls when the DNS server rejects refresh requests from hosts and the DHCP service. Most Windows hosts attempt to refresh their records every 24 hours. The DHCP service attempts updates at 50% of the IP address lease time. This option is used to limit the amount of replication traffic required for records that do not change. The default of seven days is acceptable for most networks.

Refresh. This option determines when a DDNS record can be flagged for scavenging (deletion). The default value is 7 days. By default, records that are older than the sum of the no-refresh and refresh intervals will be available for scavenging. This value must not be set to less than the maximum interval at which clients refresh their records. The default is adequate for most networks; however, if you extend your DHCP address leases to longer than 14 days, you may want to consider updating this setting to 50% of the configured lease time (a small helper expressing this rule of thumb follows these steps).

5

Click OK
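The lease-based guidance in step 4 can be expressed as a small helper. This is an illustrative sketch of the rule of thumb stated above, not a Microsoft-supplied formula:

```python
from datetime import timedelta

def suggested_refresh_interval(dhcp_lease: timedelta) -> timedelta:
    """Rule of thumb from the text: keep the 7-day default unless DHCP
    leases run longer than 14 days, in which case use 50% of the lease."""
    default = timedelta(days=7)
    return dhcp_lease / 2 if dhcp_lease > timedelta(days=14) else default

print(suggested_refresh_interval(timedelta(days=8)))   # 7 days (default)
print(suggested_refresh_interval(timedelta(days=30)))  # 15 days
```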

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781597492737000057

Support Protocols

Thomas Porter, Michael Gough, in How to Cheat at VoIP Security, 2007

RSVP Protocol

The RSVP protocol works by transferring UDP packets from the recipient of the data transfer to its sender. This allows the data recipient to control whether to use regular TCP/IP or to use a dedicated path of travel between the two clients. The connection recipient initiates this path by sending a constructed RSVP packet to the connection initiator. This packet will contain a specific Message Type that indicates the action that should be acted upon. The common Message Types for the RSVP protocol are:

Path

Resv (Reservation Request)

PathErr (Path Error)

ResvErr (Reservation Error)

PathTear (Path Teardown)

ResvTear (Reservation Teardown)

ResvConf (Reservation Confirmation)

The RSVP packet also carries a data payload containing specific information on how the path should be constructed. The payload contains information such as the following (summarized in a short sketch after this list):

Session (Destination IP, Tunnel ID, Extended Tunnel ID)

Hop (the neighboring router's IP)

Time Values (the refresh interval)

Explicit Route (a list of routers between the two devices that creates the data path)

Adspec (specifies the minimum path latency, MTU, and bandwidth requirements)
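To summarize these fields, here is a small illustrative data structure. It is a sketch only; the field names follow the list above rather than the exact on-the-wire RSVP object formats:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RsvpMessage:
    msg_type: str              # Path, Resv, PathErr, ResvErr, PathTear, ResvTear, ResvConf
    session: tuple             # (destination IP, Tunnel ID, Extended Tunnel ID)
    hop: str                   # the neighboring router's IP address
    refresh_interval_s: int    # Time Values object: how often the state is refreshed
    explicit_route: List[str] = field(default_factory=list)  # routers forming the path
    adspec: dict = field(default_factory=dict)  # minimum latency, MTU, bandwidth needs
```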

RSVP Operation

To create a dedicated path of travel, the RSVP protocol relies heavily on its Path and Resv messages. The Path message packet is used to define the path of routers to be used for communication between the two clients. This packet is sent from the receiving end of the communication towards the sender. As it passes through each individual router, the router examines the packet to determine its neighboring IP addresses, to which it must route packets. The Resv message, or Reservation request, is equally important. The Resv message is sent from each router to its neighboring router, one hop at a time. The Resv packet helps create the reservation on each router involved in the path. The transfer of Path and Resv packets is detailed in Figure 4.7.

Figure 4.7. Creating an RSVP Path

Once a path has been created, with each router maintaining a reservation for the data, it must be updated routinely to remain open. If a router has not received a Resv and Path packet before the refresh interval on the path has been exhausted, then the router will remove the reservation from itself. As Resv and Path packets arrive to maintain the reservation, they may also make changes to it. If the path between the clients is to change to substitute routers, the recipient simply sends a new Path message with the updated path and it will become effective. Each router continually updates its stored information based on the packets it receives during the transmission.
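The soft-state behavior described above can be sketched as a per-router reservation that expires unless it is refreshed in time. This is an illustrative model, not router code:

```python
class Reservation:
    """Illustrative soft-state reservation held by one router on the path."""

    def __init__(self, refresh_interval_s: float, now: float):
        self.refresh_interval_s = refresh_interval_s
        self.last_refresh = now
        self.active = True

    def on_path_or_resv(self, now: float) -> None:
        # An arriving Path/Resv message keeps the reservation alive
        # (and may also carry changes to it).
        self.last_refresh = now

    def expire_if_stale(self, now: float) -> None:
        # No refresh within the refresh interval: remove the reservation.
        if now - self.last_refresh > self.refresh_interval_s:
            self.active = False
```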

Once the communication between the two devices has ended, they initiate a teardown of the path. Although realistically they could just stop transmitting RSVP packets and eventually the reservations on the routers would expire, it is recommended that they formally tear down the path immediately after finishing the connection. The teardown may be initiated by either side of the communication, or by any of the routers within the communication. A PathTear packet may be sent downstream from the sender, or a ResvTear may be sent upstream from the receiver. As each router in the path receives a teardown packet, it immediately removes the path reservation and forwards the packet to the next hop in the path.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781597491693500050

The Basics of Managing Exchange 2007

Tony Redmond, in Microsoft Exchange Server 2007 with SP1, 2008

3.9.1 Setting mailbox quotas

Exchange has a structured flow to set and respect mailbox quotas. When you create a new mailbox, it automatically inherits the default storage limits set for the database that hosts the mailbox and, as you can see in Figure 3-46, the default limits are quite high. Exchange 2007 is the first version that sets default limits, as previous versions of Exchange leave these values blank for newly created databases. Table 3-3 lists the default storage limits together with their meaning. The table also lists the EMS shell parameter that you use to retrieve information about a user's current quota consumption with the Get-Mailbox command and the Active Directory property that stores the storage limits for the user object.

Figure 3-46. How mailbox quotas are applied

Table 3-3. Default mailbox storage limits

Active Directory property | EMS shell parameter | Meaning and default value
mDBStorageQuota | IssueWarningQuota | Issue a warning that you are close to the limit when the mailbox reaches 1.9GB.
mDBOverQuotaLimit | ProhibitSendQuota | Stop the user sending new messages when the mailbox reaches 2GB.
mDBOverHardQuotaLimit | ProhibitSendReceiveQuota | Stop the user sending and receiving messages when the mailbox reaches 2.3GB.
mDBUseDefaults | UseDatabaseQuotaDefaults | Flag to control whether the mailbox uses the default storage quotas set for the database. Default is "true."

Microsoft set the default storage limits by starting with an assumption that users will have large 2GB mailboxes, which is aligned with their general idea that Exchange 2007 is better able to support such big mailboxes and that the move toward unified messaging means that more items will end up in mailboxes. This assertion is true, if you decide to deploy Exchange 2007 Unified Messaging. Still, voicemail messages do not generally take up much extra space in a mailbox, so it is hard to accept that mailbox quotas need to increase quite so much to accommodate voicemail.

After settling on a default mailbox size of 2GB, Microsoft then calculated the point to start issuing warnings by assuming that an average user receives one hundred 50KB messages a day, or about 5MB daily and 100MB over 20 days. The gap between warnings starting at 1.9GB and a user being unable to send new messages at 2GB is sufficient for even the tardiest user to take note of the warnings and clean up their mailbox. They then decided to leave a gap of 0.3GB before the next restriction kicks in and the user is unable to receive new mail, to accommodate the situation when users take extended vacations and are unable to clean out their mailbox.
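The arithmetic behind those defaults is easy to check. The sketch below simply reproduces the numbers quoted above (one hundred 50KB messages a day); it is not an Exchange-supplied formula:

```python
# Average daily mail volume assumed by Microsoft, per the text.
messages_per_day = 100
message_size_kb = 50

daily_mb = messages_per_day * message_size_kb / 1024           # ~4.9 MB/day
warn_to_send_gap_days = (2.0 - 1.9) * 1024 / daily_mb          # ~21 days of warnings
send_to_receive_gap_days = (2.3 - 2.0) * 1024 / daily_mb       # ~63 more days of headroom

print(round(daily_mb, 1), round(warn_to_send_gap_days), round(send_to_receive_gap_days))
```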

You may decide that Microsoft's calculations are off and come up with your own default values for what limitations should be applied to mailboxes within the organization. In any event, once you have decided on quotas, you should apply the values by policy before you start to move users to Exchange 2007 so that the new mailboxes inherit the desired quotas. For specific users, you can set different quotas by editing mailbox properties, clearing the "Use mailbox database defaults" checkbox, and setting new values for warning, prohibit send, and prohibit send and receive. If you want to create a mailbox that has a truly unlimited quota, then clear the "Use mailbox database defaults" checkbox and set no values for warning, prohibit send, and prohibit send and receive.

When a user exceeds their mailbox quota, Exchange logs event 8528 in the application event log (Figure 3-47). Users also see a notification about their quota problem the next time they connect to Exchange with Outlook or the premium edition of Outlook Web Access 2007 (Figure 3-48). The light edition of Outlook Web Access, IMAP, and POP clients will not notice a problem with mailbox quota until a user attempts to create and send a new message.

Figure 3-47. Event logging mailbox quota exceeded

Figure 3-48. Outlook Web Access flags a quota problem

Apart from scanning the event log regularly to detect users that exceed their quotas, you can take a more proactive approach by checking with a PowerShell command. We will get into the details of how to use PowerShell with Exchange 2007 in Chapter 4, but for now, a one-line PowerShell command can scan the ExchMbxSvr1 server to look for mailboxes that have been disabled because they have exceeded quota.

We can do a lot more with PowerShell to help us identify mailboxes that need some administrative intervention, but we will leave the details of how to create more powerful scripts until Chapter 4.

Exchange stores persistent user quota information in the Active Directory. To avoid the need to read the Active Directory to check a quota every time a user interacts with their mailbox, the Store maintains a cache of quota information in memory. The DSAccess component maintains the cache by reading quota data from the Active Directory every two hours, which means that it can take the Store up to two hours to respect a new quota value that you assign to a mailbox. You can alter the lifetime of the cache by changing the registry values that control how DSAccess operates and how the Store reads the information. For example, to change the refresh interval to one hour, you:

1.

Open the registry editor

2.

Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem and create a new DWORD value called Reread Logon Quotas Interval and set it to 3600 (the value is in seconds, so 3600 seconds = 1 hour; the default is 7200). This value controls the interval at which the Store reads the quota data from the cache. (A scripted sketch of these registry changes follows the steps.)

3.

Create a new DWORD value called Mailbox Cache Age Limit and set it to 60 (the value is in minutes and the default is 120). This value sets an age limit for the cached information about mailbox data (including quotas).

4.

You have now updated Exchange so that it looks for information from the cache more often. You may also want to update the registry value that controls how often the DSAccess component reads information from the Active Directory to populate its cache, to ensure that changes made to the Active Directory are picked up sooner. The default interval is 5 minutes, which is normally sufficient, but you can reduce the value by navigating to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchange ADAccess\Instance0 and creating a new DWORD value called CacheTTLUser. The default value is 300 seconds, or 5 minutes. Set it to whatever value you want, understanding that any change will provoke more activity on the server as DSAccess communicates more often with a Global Catalog server to keep its cache refreshed.

5.

The changes are effective the next time you restart the Information Store service.
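For administrators who prefer to script the change, the registry edits in steps 2 through 4 could be applied with something like the following sketch. It assumes Python's standard winreg module, run elevated on the Exchange server; the paths and value names are those quoted in the steps above, while the CacheTTLUser value of 120 is purely an illustrative reduction:

```python
import winreg

STORE_KEY = r"SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem"
DSACCESS_KEY = r"SYSTEM\CurrentControlSet\Services\MSExchange ADAccess\Instance0"

def set_dword(path: str, name: str, value: int) -> None:
    # Create the key if necessary and write a REG_DWORD value under HKLM.
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, path, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)

set_dword(STORE_KEY, "Reread Logon Quotas Interval", 3600)  # seconds (default 7200)
set_dword(STORE_KEY, "Mailbox Cache Age Limit", 60)         # minutes (default 120)
set_dword(DSACCESS_KEY, "CacheTTLUser", 120)                # seconds (default 300);
                                                            # lower values mean more GC traffic
# Remember: the new values take effect after the Information Store service restarts.
```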

Most users who struggle to keep under quota will probably admit that their Inbox and Sent Items folders are stuffed full of messages that they really do not need to retain. On Exchange 2007 servers, you can use managed folder policies to help users keep some control of these folders by expiring and deleting messages after a defined period (for example, 60 days). Managed folder policies are discussed in Chapter 8.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781555583552500064

Structuring Applications for Performance

TOM McREYNOLDS, DAVID BLYTHE, in Advanced Graphics Programming Using OpenGL, 2005

21.3.3 Measuring Performance

When benchmarking any application, there are common guidelines that, when followed, help ensure accurate results. The system used to measure performance should be idle, rather than executing competing activities that could steal system resources from the application being measured. A good system clock should be used for measuring performance, with sufficient resolution, low latency, and accurate, reproducible results. The measurements themselves should be repeated a number of times to average out atypical measurements. Any significant variation in measurements should be investigated and understood. Beyond these well-known practices, however, are performance techniques and concepts that are specific to computer graphics applications. Some of these fundamental ideas are described in the following, along with their relevance to OpenGL applications.

Video Refresh Quantization

A dynamic graphics application renders a series of frames in sequence, creating animated images. The more frames rendered per second, the smoother the motion appears. Smooth, artifact-free animation also requires double buffering. In double buffering, one color buffer holds the current frame, which is scanned out to the display device by video hardware, while the rendering hardware is drawing into a second buffer that is not visible. When the new color buffer is ready to be displayed, the application requests that the buffers be swapped. The swap is delayed until the next vertical retrace period between video frames, so that the update process isn't visible on the screen.

Frame times must be integral multiples of the screen refresh time, 16.7 msec (milliseconds) for a 60-Hz display. If the rendering time for a frame is slightly longer than the time for n raster scans, the system waits until the n + 1st video period (vertical retrace) before swapping buffers and allowing drawing to continue. This quantizes the total frame time to multiples of the display refresh rate. For a 60-Hz refresh rate, frame times are quantized to (n + 1) * 16.7 msec. This means even significant improvements in performance may not be noticeable if the saving is less than that of a display refresh interval.
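A quick way to see the quantization effect is to round a raw render time up to the next refresh boundary. This sketch assumes a 60-Hz display (16.7 ms per refresh):

```python
import math

REFRESH_MS = 1000.0 / 60.0   # 16.67 ms per vertical retrace at 60 Hz

def quantized_frame_time(render_ms: float) -> float:
    # Buffer swaps only happen on a retrace, so the effective frame time is
    # the render time rounded up to a whole number of refresh intervals.
    return math.ceil(render_ms / REFRESH_MS) * REFRESH_MS

print(quantized_frame_time(17.0))  # ~33.3 ms -> 30 fps, even though 17 ms is nearly 60 fps
print(quantized_frame_time(25.0))  # ~33.3 ms -> an 8 ms improvement would be invisible
```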

Quantization makes performance tuning more difficult. First, quantizing can mask most of the details of performance improvements. Performance gains are often the sum of many small improvements, which are found by making changes to the program and measuring the results. Quantizing may hide those results, making it impossible to discover program changes that are having a small but positive effect. Quantizing also establishes a minimum barrier to making performance gains that will be visible to the user. Imagine an application running fast enough to support a 40-fps refresh rate. It will never run faster than 30 fps on a 60-Hz display until it has been optimized to run at 60 fps, almost double its original rate. Table 21.2 lists the quantized frame times for multiples of a 60-Hz frame.

Table 21.2. 60-Hz Rate Quantization

Frame Multiple | Rate (Hz) | Interval (ms)
1 | 60 | 16.67
2 | 30 | 33.33
3 | 20 | 50
4 | 15 | 66.67
5 | 12 | 83.33
6 | 10 | 100
7 | 8.6 | 116.67
8 | 7.5 | 133.33
9 | 6.7 | 150
10 | 6 | 166.67

To accurately measure the results of performance changes, quantization should be turned off. This can be done by rendering to a single-buffered color buffer. Besides making it possible to see the results of performance changes, single-buffered operation also shows how close the application's update rate is to a screen refresh boundary. This is useful in determining how much more improvement is necessary before it becomes visible in a double-buffered application. Double buffering is enabled again after all performance tuning has been completed.

Quantization can sometimes be taken advantage of in application tuning. If an application's single-buffered frame rate is not close to the next multiple of a screen refresh interval, and if the current quantized rate is adequate, the application can be modified to do additional work, improving visual quality without visibly changing performance. In essence, the time interval between frame completion and the next screen refresh is being wasted; it can be used instead to produce a richer image.

Finish Versus Flush

Modern hardware implementations of OpenGL often queue graphics commands to improve bandwidth. Understanding and controlling this process is important for accurate benchmarking and for maximizing performance in interactive applications.

When an OpenGL implementation uses queuing, the pipeline is buffered. Incoming commands are accumulated into a buffer, where they may be stored for some period of time before rendering. Some queuing pipelines employ the notion of a high water mark, deferring rendering until a given buffer has exceeded some threshold, so that commands can be rendered in a bandwidth-efficient way. Queuing can allow some parallelism between the system and the graphics hardware: the application can fill the pipeline's buffer with new commands while the hardware is rendering previous commands. If the pipeline's buffer fills, the application can do other work while the hardware renders its backlog of commands.

The process of emptying the pipeline of its buffered commands is called flushing. In many applications, especially interactive ones, the application may not supply commands in a steady stream. In these cases, the pipeline buffer can stay partially filled for long periods of time, not rendering any more commands to hardware even if the graphics hardware is completely idle.

This situation can be a problem for interactive applications. If some graphics commands are left in the buffer when the application stops rendering to wait for input, the user will see an incomplete image. The application needs a way of indicating that the buffer should be emptied, even if it isn't full. OpenGL provides the command glFlush to perform this operation. The command is asynchronous, returning immediately. It guarantees that outstanding buffers will be flushed and the pipeline will complete rendering, but doesn't provide a way to indicate exactly when the flush will complete.

The glFlush command is inadequate for graphics benchmarking, which needs to measure the duration between the issuing of the first command and the completion of the last. The glFinish command provides this functionality. It flushes the pipeline, but doesn't return until all commands sent before the finish have completed. The difference between finish and flush is illustrated in Figure 21.4.

Figure 21.4. Finish versus flush.

To benchmark a piece of graphics code, call glFinish at the end of the timing trial, just before sampling the clock for an end time. The glFinish command should also be called before sampling the clock for the start time, to ensure no graphics calls remain in the graphics queue ahead of the commands being benchmarked.
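Put together, the timing pattern looks like the sketch below, assuming PyOpenGL is available, an OpenGL context is already current, and draw_scene() is a hypothetical function that issues the commands being measured:

```python
import time
from OpenGL.GL import glFinish  # assumes the PyOpenGL bindings are installed

def benchmark(draw_scene, trials: int = 100) -> float:
    glFinish()                      # drain any previously queued commands before timing
    start = time.perf_counter()
    for _ in range(trials):
        draw_scene()                # issue the graphics commands under test
    glFinish()                      # wait until every issued command has completed
    end = time.perf_counter()
    return (end - start) / trials   # average time per trial, in seconds
```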

While glFinish ensures that every previous command has been rendered, it should be used with care. A glFinish call disrupts the parallelism the pipeline buffer is designed to achieve. No more commands can be sent to the hardware until the glFinish command completes. The glFlush command is the preferred method of ensuring that the pipeline renders all pending commands, since it does so without disrupting the pipeline's parallelism.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781558606593500238

The InfoPad Multimedia Terminal: A Portable Device for Wireless Information Access

Thomas E. Truman, ... Fellow, IEEE, in Readings in Hardware/Software Co-Design, 2002

4.1.2 Remote-I/O Processing Latency

A critical metric of the usefulness of the remote I/O architecture is the round-trip latency incurred as a packet moves through successive stages in the system. While the dominant source of latency is the network interfaces on the backbone network, early measurements ([2], [3], [8]) using standard workstations attached to a 10 Mbit/s Ethernet backbone demonstrate that a 30 millisecond round-trip latency was an achievable design constraint for a LAN-based backbone. This goal is based on the graphics refresh interval and gives the user an imperceptible difference between local- and remote-I/O processing for the pen-based user interface. Given this constraint, it is useful to evaluate the processing latency introduced by the interface between the IPbus peripherals and the wireless link. We break this latency into the following three components:

Packet generation: 3 microseconds. This is defined to be the time elapsed from when the final byte of available uplink data arrives until the packet is reported ready (i.e., a request for scheduling is generated). The bus-mastering architecture of the IPbus provides a direct path from each data source to the wireless network interface (via the TX bit buffers) without involving the processor. Thus, the packet generation latency is typically less than three IPbus clock cycles.

Scheduling: 160 microseconds. This is the time required to process the scheduling request and clear the packet for transmission. To facilitate experimentation with a variety of scheduling algorithms and media-access protocols, packetization and scheduling are separated. Dividing these functions into physically separate units increases the complexity of the packetizer by requiring it to support random access to available packets. This partitioning also increases intermodule communication by requiring the packetizer to interact with the scheduler several times for each packet, and each interaction requires several bus transactions.

The current implementation, with an idle transmitter and an empty transmit queue, has a worst-case time on the order of 160 microseconds: 50 microseconds to notify the processor, 10 microseconds for the processor to clear the packet for transmission, and 100 microseconds for the first bit of data (after 64 bits of synchronization preamble) to be transmitted over the wireless link.

Packet distribution: 1 microsecond. This is defined as the time elapsed from the moment the first byte of available downlink data is ready until the first byte of the packet is sent to its destination device (e.g., pen, audio, etc.). Since the architecture employs direct, unbuffered routing from source to destination, the packet distribution latency is only the time required to determine the hardware destination address for the given type, which can be accomplished in a single IPbus clock cycle.

The sum of these three components is 164 microseconds. This latency is insignificant compared to the latency incurred in the backbone network: 10-20 milliseconds on a standard 10 Mbit/sec LAN (well within the 30 millisecond round-trip upper bound). It is expected that state-of-the-art high-bandwidth networks that support QoS will be able to scale to support the 50 users per cell envisioned.
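The latency budget works out as follows; this is simply a restatement of the figures above in code form:

```python
# Interface latency components from the text, in microseconds.
latency_us = {"packet generation": 3, "scheduling": 160, "packet distribution": 1}

interface_total_us = sum(latency_us.values())   # 164 microseconds
backbone_ms = (10, 20)                          # typical 10 Mbit/s LAN round trip
budget_ms = 30                                  # target round-trip latency

print(interface_total_us)                                        # 164
print(interface_total_us / 1000 + backbone_ms[1] <= budget_ms)   # True: within budget
```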

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781558607026500636

DRAM Device Organization: Basic Circuits and Architecture

Bruce Jacob, ... David T. Wang, in Memory Systems, 2008

8.2 DRAM Storage Cells

Figure 8.2 shows the circuit diagram of a basic one-transistor, one-capacitor (1T1C) cell structure used in modern DRAM devices to store a single bit of data. In this structure, when the access transistor is turned on by applying a voltage on the gate of the access transistor, a voltage representing the data value is placed onto the bitline and charges the storage capacitor. The storage capacitor then retains the stored charge after the access transistor is turned off and the voltage on the wordline is removed. However, the electrical charge stored in the storage capacitor will gradually leak away with the passage of time. To ensure data integrity, the stored data value in the DRAM cell must be periodically read out and written back by the DRAM device in a process known as refresh. In the following section, the relationships between cell capacitance, leakage, and the need for refresh operations are briefly examined.

FIGURE 8.2. Basic 1T1C DRAM cell structure.

Different cell structures, such as the three-transistor, one-capacitor (3T1C) cell structure in Figure 8.3 with separate read access, write access, and storage transistors, were used in early DRAM designs.3 The 3T1C cell structure has an interesting characteristic in that reading data from the storage cell does not require the content of the cell to be discharged onto a shared bitline. That is, data reads to DRAM cells are not destructive in 3T1C cells, and a simple read cycle does not require data to be restored into the storage cell as it is in 1T1C cells. Consequently, random read cycles are faster for 3T1C cells than 1T1C cells. However, the size advantage of the 1T1C cell has ensured that this basic cell structure is used in all modern DRAM devices.

FIGURE 8.3. 3T1C DRAM cell.

Aside from the basic 1T1C cell structure, research is ongoing into alternative cell structures, such as the use of a single transistor on a Silicon-on-Insulator (SOI) process as the basic cell. In one proposed structure, the isolated substrate is used as the charge storage element, and a separate storage capacitor is not needed. Similar to data read-out of the 3T1C cell, data read-out is not destructive, and data retrieval is done via current sensing rather than charge sensing. However, despite the existence of alternative cell structures, the 1T1C cell structure is used as the basic charge storage cell structure in all modern DRAM devices, and the focus in this chapter is devoted to this dominant 1T1C DRAM cell structure.

8.2.1 Cell Capacitance, Leakage, and Refresh

In a 90-nm DRAM-optimized process technology, the capacitance of a DRAM storage cell is on the order of 30 fF, and the leakage current of the DRAM access transistor is on the order of 1 fA. With a cell capacitance of 30 fF and a leakage current of 1 fA, a typical DRAM cell can retain sufficient electrical charge that will continue to resolve to the proper digital value for an extended period of time, from hundreds of milliseconds to tens of seconds. However, transistor leakage characteristics are temperature-dependent, and DRAM cell data retention times can vary dramatically not only from cell to cell at the same time and temperature, but also at different times for the same DRAM cell.4 Regardless, memory systems must be designed so that not a single bit of data is lost due to charge leakage. Consequently, every single DRAM cell in a given device must be refreshed at least once before any single bit in the entire device loses its stored charge due to leakage. In most modern DRAM devices, the DRAM cells are typically refreshed once every 32 or 64 ms. In cases where DRAM cells have storage capacitors with low capacitance values or high leakage currents, the time period between refresh intervals is further reduced to ensure reliable data retention for all cells in the DRAM device.

8.2.2 Conflicting Requirements Drive Cell Structure

Since the invention of the 1T1C DRAM cell, the physical structure of the basic DRAM cell has undergone continuous evolution. DRAM cell structure evolution occurred as a response to the conflicting requirements of smaller cell sizes, lower voltages, and noise tolerances needed in each new process generation. Figure 8.4 shows an abstract implementation of the 1T1C DRAM cell structure. A storage capacitor is formed from a stacked (or folded plate) capacitor structure that sits in between the polysilicon layers above active silicon. Alternatively, some DRAM device manufacturers instead use cells with trench capacitors that dive deeply into the active silicon area. Modern DRAM devices typically use one of these two different forms of the capacitor structure as the basic charge storage element.

FIGURE 8.4. Cross-section view of a 1T1C DRAM cell with a trench capacitor. The storage capacitor is formed from a trench capacitor structure that dives deeply into the active silicon area. Alternatively, some DRAM device manufacturers instead use cells with a stacked capacitor structure that sits in between the polysilicon layers above the active silicon.

In recent years, two competing camps have formed between manufacturers that use a trench capacitor and manufacturers that use a stacked capacitor as the basic charge storage element. Debates are ongoing as to the relative costs and long-term scalability of each design. For manufacturers that seek to integrate DRAM cells with logic circuits on the same process technology, the trench capacitor structure allows for better integration of embedded DRAM cells with logic-optimized semiconductor process technologies. However, manufacturers that focus on stand-alone DRAM devices appear to favor stacked capacitor cell structures as opposed to the trench capacitor structures. Currently, DRAM device manufacturers such as Micron, Samsung, Elpida, Hynix, and the bulk of the DRAM manufacturing industry use the stacked capacitor structure, while Qimonda, Nanya, and several other smaller DRAM manufacturers use the trench capacitor structure.

8.2.3 Trench Capacitor Structure

Currently, the overriding consideration for DRAM devices in general, and commodity DRAM devices in particular, is that of cost minimization. This overriding consideration leads directly to the pressure to reduce the cell size: either to increase the selling price by putting more DRAM cells onto the same piece of silicon real estate or to reduce cost for the same number of storage cells. The pressure to minimize cell area, in turn, means that the storage cell either has to grow into a three-dimensional stacked capacitor above the surface of the silicon or has to grow deeper into a trench below the surface of the active silicon. Figure 8.4 shows a diagram of the 1T1C DRAM cell with a deep trench capacitor as the storage element. The abstract illustration in Figure 8.4 shows the top cross section of the trench capacitor.5 The depth of the trench capacitor allows a DRAM cell to decrease the use of the silicon surface area without decreasing storage cell capacitance. Trench capacitor structures and stacked capacitor structures have corresponding advantages and disadvantages. One advantage of the trench capacitor design is that the three-dimensional capacitor structure is under the interconnection layers, so that the higher level metal layers can be more easily made planar. The planar feature of the metal layers means that the process could be more easily integrated into a logic-optimized process technology, where there are more metal layers above the active silicon. The buried structure also means that the trench capacitor could be constructed before logic transistors are constructed. The importance of this subtle distinction is that processing steps to create the capacitive layer could be activated before logic transistors are made, and the performance characteristics of logic transistors would not be degraded by formation of the (high-temperature) capacitive layer.6

8.2.4 Stacked Capacitor Structure

The stacked capacitor structure uses multiple layers of metal or conductive polysilicon above the surface of the silicon substrate to form the plates of the capacitor that holds the stored electrical charge. Figure 8.5 shows an abstract illustration of the stacked capacitor structures. The capacitor structure in Figure 8.5 is formed between two layers of polysilicon, and the capacitor lies underneath the bitline. It is referred to as the Capacitor-under-Bitline (CUB) structure. The stacked capacitive storage cell can also be formed above the bitline in the Capacitor-over-Bitline (COB) structure. Regardless of the location of the storage cell relative to the bitline, both the CUB and COB structures are variants of the stacked capacitor structure, and the capacitor resides in the polysilicon layers above the active silicon. The relentless pressure to reduce DRAM cell size while retaining cell capacitance has forced the capacitor structure to grow in the vertical dimension, and the evolution of the stacked capacitor structure is a natural migration from two-dimensional plate capacitor structures to three-dimensional capacitor structures.

FIGURE 8.5. Abstract view of a 1T1C DRAM cell with stacked capacitor.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123797513500102