Web example sentences related to "cache"


Example sentences related to "cache" [Note: these examples are collected from the web and are for reference only.]

It also covers the three cache mapping schemes and their respective advantages and disadvantages, as well as several practical issues in using cache memory.
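
As background for the mapping schemes mentioned above, the following minimal C sketch shows how an address is decomposed under direct-mapped and N-way set-associative placement (fully associative being the one-set special case). The cache and line sizes are hypothetical and are not taken from the text.

/* Illustrative sketch only; cache geometry is hypothetical. */
#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE  64u            /* bytes per cache line (assumed)          */
#define CACHE_SIZE (32u * 1024u)  /* total cache capacity in bytes (assumed) */

/* Direct mapping: every address maps to exactly one line. */
static uint32_t direct_mapped_index(uint32_t addr) {
    uint32_t lines = CACHE_SIZE / LINE_SIZE;            /* 512 lines */
    return (addr / LINE_SIZE) % lines;
}

/* N-way set-associative mapping: the address selects a set, and the block may
 * live in any of the N ways of that set; fully associative is the special case
 * of a single set holding all lines. */
static uint32_t set_assoc_index(uint32_t addr, uint32_t ways) {
    uint32_t sets = CACHE_SIZE / (LINE_SIZE * ways);    /* e.g. 128 sets, 4-way */
    return (addr / LINE_SIZE) % sets;
}

int main(void) {
    uint32_t addr = 0x12345678u;
    printf("direct-mapped line index: %u\n", (unsigned)direct_mapped_index(addr));
    printf("4-way set index         : %u\n", (unsigned)set_assoc_index(addr, 4));
    return 0;
}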

This paper proposes a cache organization suited to chip multiprocessors (CMPs): TCLC, a cache design that trades off latency against capacity.

At the processor level, we analyze techniques commonly used in modern processors, including hyper-threading, very long instruction word (VLIW) techniques, and NUMA architectures, how they affect programs with different characteristics, and how to exploit them to improve performance. In the memory hierarchy, based on the characteristics of the cache, main memory, local disk, and network storage in a cluster system, we investigate how data locality and the cache hit rate affect performance and how data locality can be improved. At the communication level, we analyze the features of different cluster communication systems and cluster interconnection networks. For programming models, we compare the two most widely used parallel programming models, message passing and shared memory, examine how they affect cluster performance, and propose ways to improve it.
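
As a generic illustration of the data-locality point (not code from the cited thesis), the sketch below sums the same matrix in row-major and column-major order; the row-major version walks memory with unit stride and therefore achieves a far higher cache hit rate. The matrix size is an assumption.

/* Generic locality illustration; N is a hypothetical size. */
#include <stddef.h>

#define N 2048
static double a[N][N];

double sum_row_major(void) {          /* cache-friendly: unit-stride accesses */
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

double sum_col_major(void) {          /* cache-hostile: stride of N doubles */
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i][j];
    return s;
}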

The miss-rate monitor dynamically tracks, per process, an application's miss rate under different cache capacities; the weighted cache-partitioning algorithm extends the traditional miss-rate-optimal partitioning algorithm by assigning each application a weight according to its number of threads when the cache is partitioned, so that applications with more threads receive a larger share of the shared cache, thereby improving overall system performance.
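
A minimal sketch of the general idea of thread-count-weighted partitioning, assuming a hypothetical 16-way shared cache and three applications; it illustrates proportional way allocation only and is not the algorithm from the cited work.

/* Illustrative sketch only; way count and thread counts are hypothetical. */
#include <stdio.h>

#define TOTAL_WAYS 16
#define NUM_APPS   3

int main(void) {
    int threads[NUM_APPS] = {1, 2, 5};   /* assumed per-application thread counts (weights) */
    int ways[NUM_APPS];
    int total_threads = 0, assigned = 0;

    for (int i = 0; i < NUM_APPS; i++) total_threads += threads[i];

    /* proportional share of the ways, with at least one way per application */
    for (int i = 0; i < NUM_APPS; i++) {
        ways[i] = (TOTAL_WAYS * threads[i]) / total_threads;
        if (ways[i] == 0) ways[i] = 1;
        assigned += ways[i];
    }

    /* give any leftover ways to the most heavily threaded application */
    int heaviest = 0;
    for (int i = 1; i < NUM_APPS; i++)
        if (threads[i] > threads[heaviest]) heaviest = i;
    ways[heaviest] += TOTAL_WAYS - assigned;

    for (int i = 0; i < NUM_APPS; i++)
        printf("app %d: %d threads -> %d ways\n", i, threads[i], ways[i]);
    return 0;
}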

Previous work on optimizing program data locality has followed two main approaches. The first analyzes the data accessed at run time and, based on a model of cache behavior, builds program-analysis tools that expose data-cache performance bottlenecks at the source level, guiding the programmer to improve locality through program transformations and thus raise cache performance. The second adds optimization passes to a compiler, or builds a dedicated optimization tool, which analyzes the program and automatically applies program transformations, both code transformations and data-layout transformations, to improve locality and hence cache performance.
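
The following sketch shows a typical locality-improving program transformation of the kind such compilers and tools apply: loop tiling of a matrix transpose. The matrix and tile sizes are hypothetical tuning parameters, and this is an illustration rather than the transformations used in the cited work.

/* Illustrative loop-tiling sketch; N and B are hypothetical. */
#include <stddef.h>

#define N 1024
#define B 64   /* tile edge, chosen so one tile's working set fits in cache */

/* Original loop nest: dst is written with stride N, thrashing the cache. */
void transpose_naive(double dst[N][N], const double src[N][N]) {
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            dst[j][i] = src[i][j];
}

/* Transformed loop nest: the same work restructured into B x B tiles so that
 * both src and dst stay resident in the cache while a tile is processed. */
void transpose_tiled(double dst[N][N], const double src[N][N]) {
    for (size_t ii = 0; ii < N; ii += B)
        for (size_t jj = 0; jj < N; jj += B)
            for (size_t i = ii; i < ii + B; i++)
                for (size_t j = jj; j < jj + B; j++)
                    dst[j][i] = src[i][j];
}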

Extending the dual-port data cache, we designed a sixteen-port data cache. Simulation results show that, compared with a sixteen-port data cache built from single-port RAM, the average access time is reduced by about 20%, while the hardware implementation is relatively simple and occupies little chip area.

The work in this thesis is part of a pre-research project undertaken by the Aviation Microelectronics Center of Northwestern Polytechnical University. Building on participation in the design of the 32-bit embedded microprocessor "Longtium R2", we designed and implemented a data cache based on dual-port RAM; applied to the "Longtium R2" microprocessor, it supports synchronous snooping and effectively improves processor performance in multiprocessor environments. On top of the dual-port RAM, we then studied multi-port data caches and proposed an implementation scheme for a sixteen-port data cache.

Simulation results indicate that with sequential access, the power consumption of the four-way set-associative cache is reduced by 26% on a cache hit and by 35% on a cache miss, compared with the conventional parallel-access scheme.
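
The sketch below contrasts the two lookup schemes in software terms, assuming a hypothetical 4-way set: in hardware, sequential access activates one tag/data way at a time and stops on a hit, which is where the power saving comes from. The "activations" counter merely stands in for that energy; this is not the circuit from the cited work.

/* Illustrative sketch; set contents and tags are hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WAYS 4

struct way { bool valid; uint32_t tag; };

/* Conventional parallel lookup: all four ways are activated at once. */
static int lookup_parallel(const struct way set[WAYS], uint32_t tag, int *activations) {
    *activations = WAYS;
    for (int w = 0; w < WAYS; w++)
        if (set[w].valid && set[w].tag == tag) return w;
    return -1;
}

/* Sequential lookup: ways are probed one after another, stopping on a hit. */
static int lookup_sequential(const struct way set[WAYS], uint32_t tag, int *activations) {
    *activations = 0;
    for (int w = 0; w < WAYS; w++) {
        (*activations)++;                 /* only this way is activated */
        if (set[w].valid && set[w].tag == tag) return w;
    }
    return -1;
}

int main(void) {
    struct way set[WAYS] = {{true, 0x10}, {true, 0x22}, {true, 0x33}, {true, 0x44}};
    int act_p, act_s;
    lookup_parallel(set, 0x22, &act_p);
    lookup_sequential(set, 0x22, &act_s);
    printf("parallel activations: %d, sequential activations: %d\n", act_p, act_s);
    return 0;
}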

To reduce instruction-cache conflicts when multiple threads run on a CMT processor, we propose a policy that logically partitions the L1 instruction cache into 2^n equal parts, together with a competing loop-lock mechanism. Existing fairness studies of CMT processors usually require interrupting the other threads to run single-thread sampling phases; to address this, we propose FROCM, a fairness policy that needs no single-thread sampling. We propose a ring-cooperative L1 data cache organization, which reduces design complexity and relieves the heavy load and conflicts on the shared L2 cache. We also propose a dynamic thread-exchange technique based on a fast shared-data pool, which detects data affinity between two threads in real time and quickly migrates them onto the same core. Finally, based on these studies, we design and implement YHFT DSP/DS, a dual-core, dual-thread VLIW prototype chip; to increase data-path bandwidth and reduce the critical-path delay of the CMT processor, a 10-read/6-write-port register file is designed in a full-custom flow.
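
One plausible reading of the power-of-two partitioning idea, sketched with hypothetical parameters (256 sets, two partitions for a dual-thread core): the top bits of the set index are overridden by a partition id derived from the hardware thread id, so each thread indexes only its own slice of the instruction cache. This is an illustration, not the policy from the cited work.

/* Illustrative sketch only; geometry and indexing scheme are assumptions. */
#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE 32u
#define NUM_SETS  256u   /* total sets in the I-cache (assumed)        */
#define PART_BITS 1u     /* 2^1 = 2 partitions for a dual-thread core  */

static uint32_t partitioned_set_index(uint32_t pc, uint32_t thread_id) {
    uint32_t index         = (pc / LINE_SIZE) % NUM_SETS;
    uint32_t sets_per_part = NUM_SETS >> PART_BITS;
    uint32_t part          = thread_id & ((1u << PART_BITS) - 1u);
    /* keep the low index bits, force the high bits to the thread's partition */
    return (part * sets_per_part) | (index % sets_per_part);
}

int main(void) {
    uint32_t pc = 0x00401a30u;
    printf("thread 0 -> set %u\n", (unsigned)partitioned_set_index(pc, 0));
    printf("thread 1 -> set %u\n", (unsigned)partitioned_set_index(pc, 1));
    return 0;
}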

The thesis then describes the working principle of the router cache and discusses its organization, its place in the system, and the functions it should provide; it also presents a cache coherence protocol for multiprocessor systems that contain router caches.
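
For background only, here is a minimal sketch of a generic MSI coherence state machine; the protocol actually designed for router caches in the cited work is not specified here, so this should be read as a baseline illustration rather than that protocol.

/* Generic MSI sketch; not the router-cache protocol from the cited work. */
#include <stdio.h>

typedef enum { INVALID, SHARED, MODIFIED } line_state_t;
typedef enum { LOCAL_READ, LOCAL_WRITE, REMOTE_READ, REMOTE_WRITE } bus_event_t;

static line_state_t msi_next(line_state_t s, bus_event_t e) {
    switch (e) {
    case LOCAL_READ:   return (s == INVALID)  ? SHARED : s;   /* fill on miss   */
    case LOCAL_WRITE:  return MODIFIED;                       /* gain ownership */
    case REMOTE_READ:  return (s == MODIFIED) ? SHARED : s;   /* downgrade      */
    case REMOTE_WRITE: return INVALID;                        /* invalidate     */
    }
    return s;
}

int main(void) {
    line_state_t s = INVALID;
    s = msi_next(s, LOCAL_READ);    /* INVALID  -> SHARED   */
    s = msi_next(s, LOCAL_WRITE);   /* SHARED   -> MODIFIED */
    s = msi_next(s, REMOTE_READ);   /* MODIFIED -> SHARED   */
    printf("final state: %d\n", (int)s);
    return 0;
}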
