10:49 Oct 31, 2019
Dutch to English translations [PRO]
Tech/Engineering - IT (Information Technology)
Selected response from: Marijke Singer (Spain), local time: 10:38
Summary of answers provided
- 5 (+1): update, write-back, write back or writeback
- 5: Writeback
- 4: send back / transfer back, etc.
Summary of reference entries provided
- refs
Discussion entries: 3
Writeback
Explanation: As in a writeback cache.
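The distinction behind this answer can be sketched in a few lines of Python. This is an illustrative toy only (the function names and the use of plain dicts for the cache and backing store are my own, not from the question or any library): a write-through write updates both stores at once, while a write-back write touches only the cache and defers the copy to main memory.

```python
# Toy sketch of the two write policies. Plain dicts stand in for the
# cache and the backing store (main memory); all names are illustrative.

def write_through(cache, backing, key, value):
    """Write synchronously to both the cache and the backing store."""
    cache[key] = value
    backing[key] = value

def write_back(cache, dirty, key, value):
    """Write only to the cache; mark the line dirty for a later lazy write."""
    cache[key] = value
    dirty.add(key)          # backing-store copy is now stale

def flush(cache, dirty, backing):
    """The deferred write: copy all dirty lines out to the backing store."""
    for key in dirty:
        backing[key] = cache[key]
    dirty.clear()
```

After `write_back`, the backing store still holds the old (stale) value until `flush` runs, which is exactly the crash-loss window the reference entries below describe.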
11 mins; peer agreement (net): +1
7 hrs
1 hr
Reference: refs

Reference information:

‘DEFINITION: write back
Posted by: Margaret Rouse, WhatIs.com; Contributor(s): Stan Gibilisco

Write back is a storage method in which data is written into the cache every time a change occurs, but is written into the corresponding location in main memory only at specified intervals or under certain conditions. When a data location is updated in write back mode, the data in cache is called fresh, and the corresponding data in main memory, which no longer matches the data in cache, is called stale. If a request for stale data in main memory arrives from another application program, the cache controller updates the data in main memory before the application accesses it.

Write back optimizes the system speed because it takes less time to write data into cache alone, as compared with writing the same data into both cache and main memory. However, this speed comes with the risk of data loss in case of a crash or other adverse event. Write back is the preferred method of data storage in applications where occasional data loss events can be tolerated. In more critical applications such as banking and medical device control, an alternative method called write through practically eliminates the risk of data loss, because every update gets written into both the main memory and the cache. In write through mode, the main memory data always stays fresh.’
(https://whatis.techtarget.com/definition/write-back)

Note added at 1 hr (2019-10-31 12:16:00 GMT):

‘Writing policies

[Figures: a write-through cache with no-write allocation; a write-back cache with write allocation. Main article: Cache coherence]

When a system writes data to cache, it must at some point write that data to the backing store as well. The timing of this write is controlled by what is known as the write policy.
There are two basic writing approaches:[3]

▶ Write-through: write is done synchronously both to the cache and to the backing store.
▶ Write-back (also called write-behind): initially, writing is done only to the cache. The write to the backing store is postponed until the modified content is about to be replaced by another cache block.

A write-back cache is more complex to implement, since it needs to track which of its locations have been written over, and mark them as dirty for later writing to the backing store. The data in these locations are written back to the backing store only when they are evicted from the cache, an effect referred to as a lazy write. For this reason, a read miss in a write-back cache (which requires a block to be replaced by another) will often require two memory accesses to service: one to write the replaced data from the cache back to the store, and then one to retrieve the needed data. Other policies may also trigger data write-back. The client may make many changes to data in the cache, and then explicitly notify the cache to write back the data.

Since no data is returned to the requester on write operations, a decision needs to be made on write misses: whether or not data would be loaded into the cache. This is defined by these two approaches:

▶ Write allocate (also called fetch on write): data at the missed-write location is loaded to cache, followed by a write-hit operation. In this approach, write misses are similar to read misses.
▶ No-write allocate (also called write-no-allocate or write around): data at the missed-write location is not loaded to cache, and is written directly to the backing store. In this approach, data is loaded into the cache on read misses only.

Both write-through and write-back policies can use either of these write-miss policies, but usually they are paired in this way:[4]

▶ A write-back cache uses write allocate, hoping for subsequent writes (or even reads) to the same location, which is now cached.
▶ A write-through cache uses no-write allocate. Here, subsequent writes have no advantage, since they still need to be written directly to the backing store.

Entities other than the cache may change the data in the backing store, in which case the copy in the cache may become out-of-date or stale. Alternatively, when the client updates the data in the cache, copies of those data in other caches will become stale. Communication protocols between the cache managers which keep the data consistent are known as coherency protocols.’
(https://en.wikipedia.org/wiki/Cache_(computing))
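The mechanics quoted above (dirty bits, write allocate, and the lazy write on eviction) can be sketched as a small Python class. This is a hedged illustration under my own assumptions, not a real cache implementation: the class name and API are invented, a dict stands in for the backing store, and eviction is simple LRU.

```python
from collections import OrderedDict

class WriteBackCache:
    """Toy write-back, write-allocate cache with LRU eviction (illustrative)."""

    def __init__(self, capacity, backing):
        self.capacity = capacity
        self.backing = backing          # dict standing in for the backing store
        self.lines = OrderedDict()      # key -> value, in LRU order
        self.dirty = set()              # lines written over but not yet flushed

    def _evict_if_full(self):
        if len(self.lines) >= self.capacity:
            victim, value = self.lines.popitem(last=False)   # evict LRU line
            if victim in self.dirty:                         # the lazy write
                self.backing[victim] = value
                self.dirty.discard(victim)

    def read(self, key):
        if key in self.lines:                    # read hit
            self.lines.move_to_end(key)
            return self.lines[key]
        self._evict_if_full()                    # miss may cost a write-back too
        value = self.lines[key] = self.backing[key]
        return value

    def write(self, key, value):
        if key not in self.lines:                # write miss
            self._evict_if_full()
        self.lines[key] = value                  # write allocate: cache the line
        self.lines.move_to_end(key)
        self.dirty.add(key)                      # backing copy is now stale
```

Note how a read miss on a full cache performs two backing-store accesses, as the quote describes: first the write-back of the evicted dirty line, then the fetch of the requested one.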