
The Most Detailed Lattice DDR3 Tutorial on the Web: A Complete Guide to Simulation and Timing Constraints

The Lattice DDR3 Tutorial: Simulation

For this part, first install Diamond and ModelSim SE 10.1a. If you want to analyze the DDR3 IP in depth, read the DDR3 SDRAM Controller IP Core User's Guide (hereafter "the UG"), downloadable from the Lattice website. For DDR3 fundamentals, look for 《高手进阶,终极内存技术指南——完整进阶版》 (an advanced memory technology guide); it is worth a read.

Lattice's DDR3 controller interface logic is relatively simple and easy to understand. Let's look at the internal structure of the DDR3 IP:

[Figure: DDR3 IP core block diagram]

Initialization Module: initializes and configures the DDR3 device after power-up according to the JEDEC standard, setting its mode registers and operating parameters. For the specific registers, see the DDR3 protocol documents; the JEDEC specification covers them in detail. When initialization completes, this module asserts a done signal to the user.

sysCLOCK PLL: supplies the clocks the IP needs and provides one clock, k_clk, to the user side.

Data Path Logic: moves data read from the DDR3 across to the user side. Write data does not pass through this module; it enters via the Command Application Logic (CAL) module.

Command Decode Logic (CDL): decodes commands so the core accesses the DDR3 device exactly as the issued command dictates.

DDR3 PHY: converts the single-ended internal data to the differential signalling driven to the DDR3 device, and converts differential back to single-ended on input.

A passing familiarity with the above is enough; there is no need to dig deeper.

[Figure: DDR3 initialization timing]

After power-up, the user should hold init_start high for at least 200µs, until init_done is pulsed high for one cycle, and then pull init_start low. Seeing init_done go high means chip initialization is complete, and is also a fair sign that the hardware is OK; you can then move on. Read and write control is very simple and the UG explains it clearly, so I won't repeat it here.
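The UG spells out this handshake; purely as an illustration, here is a minimal Verilog sketch of the user side (the port names follow the description above, while the clock frequency and counter width are my own assumptions):

// Minimal init_start/init_done handshake sketch (hypothetical).
// Assumes the user clock is 200 MHz, so 200 us = 40,000 cycles.
module init_ctrl (
    input  wire clk,        // user-side clock from the IP (assumed 200 MHz)
    input  wire rst_n,      // active-low reset
    input  wire init_done,  // pulsed high by the IP when init completes
    output reg  init_start  // held high >= 200 us, dropped after init_done
);
    localparam T200US = 16'd40000;  // 200 us at 200 MHz (assumption)
    reg [15:0] cnt;

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            cnt        <= 16'd0;
            init_start <= 1'b0;
        end else if (init_done) begin
            init_start <= 1'b0;          // deassert once the IP reports done
        end else if (cnt < T200US) begin
            cnt        <= cnt + 16'd1;
            init_start <= 1'b1;          // hold high for at least 200 us
        end
        // after T200US, init_start simply stays high until init_done
    end
endmodule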

Before starting a DDR3 design you simulate first, no question about it, and ModelSim is the natural choice. Before simulating, do the prep work: compile the simulation libraries ModelSim needs. The procedure is similar to Altera's and Xilinx's; see 《在Modelsim中建立LATTICE仿真库》 (Building Lattice Simulation Libraries in ModelSim), already shared. Lattice's documentation really is weaker than Xilinx's and Altera's, a frequent complaint among engineers using Lattice FPGAs, and without a mentor Lattice can be genuinely infuriating. Yet it is not as unusable as rumor claims; much of the time the obstacle is not difficulty but our unwillingness to dig in, fear born of unfamiliarity and first impressions. Enough talk, on to the main subject.

Now assume your Lattice libraries compiled successfully; naturally we want to make the most of every available resource to speed things up... (ten thousand words omitted). First, note that the Lattice DDR3 IP ships with a ModelSim example: once the IP is instantiated, find ***\ddr_p_eval\ddr3core\sim, then in ModelSim SE type do ddr3core_eval.do. Normally it runs to completion, and the simulated initialization is short, faster than Altera's, which pleases me greatly. The stimulus is also easy to read, organized mostly as tasks, which beats Altera's hands down; Altera's is written in SystemVerilog and is rather a pain.

Sometimes the generated example runs with no problem at all, yet just now, verifying it once more, this appeared...

[screenshot: error from the generated eval simulation]

Speechless. Try it yourself and see.

Of course there is another, better example, otherwise this tutorial would be pointless: the DDR3 demo downloadable from the Lattice website. The demo is written as synthesizable simulation code, convenient to port, and its state machine driving the DDR3 IP is efficient, though not easy to extend. It comes with a ModelSim simulation project, which is exactly what we need; everything else you should rework into your own logic.

Step one, of course, is to download it from the official site; it won't fall onto your hard drive by itself, will it...

[screenshot: DDR3 demo download page]

Once downloaded, open it:

[screenshot: demo project contents]

The IP files and user logic have already been added here. Next, change the device to the one you need, then open the IP configuration file and reconfigure it. ddr3_demo.lpf is the timing-constraint file generated with the project; from now on all our constraints go into this file. Keep that firmly in mind; the constraints themselves are covered in the timing part.

Now do the following:

[screenshot: changing the target device]

Change it to the device you need; I chose an ECP3-35 here, pick whatever suits you, but make sure your part supports DDR3. Before this you need to install the DDR3 IP configuration package: click IP Server to download it online. Then click the IP, and the screen below appears:

[screenshot: IPexpress IP Server]

With the IP installed, do the following:

[screenshot: launching the IP reconfiguration]

Reconfigure the IP:

Open the original IP file and regenerate it; choose Verilog as the module output, and the configuration pages open:

[screenshot: DDR3 IP configuration, page 1]

Select memory: choose the memory part number. Lattice lists only a few, so usually pick Custom and set the parameters yourself.

Clock: the clock frequency the memory runs at. Two values are supported: 400MHz, i.e. DDR3 at 800Mbps per pin, and 300MHz, i.e. DDR3 at 600Mbps. Confirm two things here: 1. the selected device is a -8 speed grade, since only -8 parts reach 400MHz; 2. the memory can actually run as low as 600Mbps, since some parts start at 800Mbps; check the DDR3 datasheet. Also, if you choose 300MHz, you must additionally edit ddr3_pll.v in the generated IP and change its configuration parameters, as follows:

[screenshot: modified ddr3_pll.v parameters]

Later, open the auto-generated ddr3_pll.v and compare it with mine to spot the differences; it is just the PLL configuration parameters. This is one of Lattice's annoying, user-unfriendly spots...

Memory configuration: the other parameters are self-explanatory; just one reminder:

[screenshot: memory type selection]

What do these three options mean?

Unbuffered DIMM: the unbuffered module, the standard DIMM we use every day (DIMM: dual in-line memory module; "dual" means the module's edge connector has two rows of contacts, with the gold fingers on each side of the board wired to their own row), available with or without ECC, abbreviated UDIMM.

On-board memory: discrete memory chips soldered on the board; this is what the demo selects.

Registered DIMM: the registered module used in high-end servers, available with or without ECC, though practically everything on the market has ECC; abbreviated RDIMM.

For the remaining settings, see DoubleDataRateDDR3SDRAMControllerIPCoreUsersGuide.pdf.

[screenshot: DDR3 IP configuration, page 2]

Page 2: Row size and Column size must match the datasheet; leave the rest at their defaults.

Page 3: memory device timing. If you are a beginner, keep the defaults; otherwise fill in the parameters from the datasheet of your chosen chip.

[screenshot: memory device timing page]

Page 4: note the following points:

[screenshot: pin-side and PLL settings, page 4]

The Lattice DDR3 IP can be placed on either the left or the right side of the chip; this is a device characteristic tied to where its DQ pins are. Once the pin side is chosen, the clk_in pin is fixed accordingly. This is the external clock input pin, generally 100MHz differential, and it cannot be hooked up arbitrarily; here it is L5. All of this must later be added to the timing constraints.

PLL_used: which PLL is used, usually the nearest one. PLL_R35C5 is the name of that PLL hardware block and encodes its location; consult the ECP3 handbook. This, too, must be added to the timing constraints.

em_ddr_clk: the bank containing the clock that the FPGA outputs to the DDR3 device.

DQS_0, DQS_1: J3, P5. These depend on the selected device and can be adjusted to your pin resources, but you must follow the DDR3 pin-placement rules: DQS and its DQ must sit in the same bank, never across banks, otherwise the fitter fails and SSN suffers too. Passing the timing constraints is the final criterion.

Page 5: keep the defaults.

Because the DDR3 device runs at 400MHz with a 16-bit bus and the IP exposes a 64-bit user-side data bus, the user clock must be 200MHz (800Mbps × 16 pins = 12.8Gbps = 64 bits × 200MHz, so the two sides balance).

Finally, click Generate...

Have a cup of tea... wait a while...

Until you see...

[screenshot: IP generation complete]

which means the IP was generated successfully; then close the dialog.

Now for the simulation. Here I strongly recommend the script-driven approach: it is a little more work the first time, but you can mostly adapt an existing script, and once you have it down it really is a one-time investment. Watch:

Open ddr3_ecp3_demo.do and change the Diamond path to your own installation:

[screenshot: Diamond path in the .do file]

Create the work library:

[screenshot: creating the work library]

Map the folders that must be loaded: the Lattice libraries (ecp3, pmi), the simulation sources, the testbench, the IP files, and the memory model. The Tcl command style here repays careful study. From now on, just drop your own sources into the src folder.

[screenshot: library and source mappings]

Compile all the files:

[screenshot: compiling all files]

Run:

[screenshot: run commands]

With the .do file updated, open ModelSim 10.1a and type cd E:/ddr3_demo/user_logic/sim/modelsim to switch to the project directory, then type do ddr3_ecp3_demo.do, and you can watch the compile and load run...
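Since the screenshots may not survive reposting, here is a minimal sketch of what such a .do script typically contains; every path, library location and file name below is an assumption and must be adapted to your own tree:

# hypothetical ddr3_ecp3_demo.do sketch -- adjust every path to your install
set DIAMOND_DIR "C:/lscc/diamond/3.12"   ;# assumed Diamond location

vlib work                                ;# create the work library
vmap work work

# map the precompiled Lattice libraries (paths are assumptions)
vmap ecp3 $DIAMOND_DIR/cae_library/simulation/vendor/ecp3
vmap pmi  $DIAMOND_DIR/cae_library/simulation/vendor/pmi

# compile the sources (assumed layout: user code in src, IP models, testbench)
vlog ../../src/*.v
vlog ../../../core/ddr_p_eval/models/ecp3/*.v
vlog ../testbench/ddr3_test_top_tb.v

# load and run with the Lattice libraries attached
vsim -L ecp3 -L pmi work.ddr3_test_top_tb
add wave -r /*
run -all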

[screenshot: ModelSim compiling and loading]

Then the wave window opens and the lovely waveforms appear. Done...

[screenshot: simulation waveform]

init_done goes high...

[screenshot: init_done going high]

From the waveforms it is easy to see that the demo is simply a continuous write-read-verify loop; build your own code on top of it.

I used ModelSim SE 10.1a here. Trying ModelSim SE 10.2c instead produced the following error:

[screenshot: error under ModelSim SE 10.2c]

It appears to be a SystemVerilog support issue; I haven't worked out the root cause and would welcome pointers from other readers.

In the testbench folder, ddr3_test_top_tb is the stimulus file; it provides the clocks, reset and so on needed for simulation. The following two lines in it are indispensable and are specific to Lattice devices:

GSR GSR_INST (.GSR(VCCI_sig));
PUR PUR_INST (.PUR(VCCI_sig));

where VCCI_sig is 1. Alternatively:

GSR GSR_INST (.GSR(rest_n));
PUR PUR_INST (.PUR(rest_n));

where rest_n is the active-low reset signal. Nothing else in the stimulus needs special attention.
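For context, a minimal testbench skeleton showing where these two instances usually sit (the clock period and all names other than GSR/PUR are assumptions):

`timescale 1ns/1ps
module ddr3_tb_sketch;                // hypothetical skeleton, not the demo tb
    reg  clk_in = 1'b0;
    reg  rest_n = 1'b0;
    wire VCCI_sig = 1'b1;

    // Lattice-specific global set/reset and power-up reset models; without
    // them the Lattice primitives stay in reset throughout the simulation.
    GSR GSR_INST (.GSR(VCCI_sig));
    PUR PUR_INST (.PUR(VCCI_sig));

    always #5 clk_in = ~clk_in;       // 100 MHz reference clock (assumption)

    initial #200 rest_n = 1'b1;       // release reset after 200 ns
    // ... instantiate the DUT here ...
endmodule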

To change the code, edit the sources, then in the ModelSim transcript type quit -sim to leave the current simulation, and type do ddr3_ecp3_demo.do again.

You will then see the waveforms for the updated code.

A side note on DDR simulation:

When you need DDR, the first thing to work out is the bandwidth, which directly determines the part you choose. For example, 1920×1080@30Hz with 16-bit YUV data needs 1920×1080×30×16 = 0.9269 Gbit/s (dividing by 1024³). A simple DDR frame buffer involves one write and one read stream, so the DDR must provide at least 2×0.9269 Gbit/s. Next comes efficiency: Lattice claims its DDR IP controller reaches 90% without trouble, but 80% is the usual planning limit, so the DDR bandwidth had best exceed (2×0.9269)/0.8 = 2.31725 Gbit/s. For ordinary DDR2/DDR3 in video applications, capacity is easily sufficient; bandwidth is the real concern. A DDR2 part at 400Mbps per pin (200MHz clock), 16 bits wide, 1Gbit capacity, offers 200×16×2/1024 = 6.25 Gbit/s, so the required efficiency is 2.31725/6.25 = 37.076%; a DDR3 part at 800Mbps (400MHz clock), 16 bits wide, 1Gbit, offers 400×16×2/1024 = 12.5 Gbit/s, needing only 2.31725/12.5 = 18.538%. DDR2 sees less and less use and its price rises accordingly, so products usually weigh this; use DDR3 if you can. Lattice's ECP3 devices all support DDR2 and DDR3, whereas in Altera's Cyclone family only Cyclone V and later support DDR3, and in Xilinx only Spartan-6 and later, so it is a trade-off to consider.
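Stated generally (just a restatement of the arithmetic above, with η the assumed controller efficiency):

BW_required = (N_streams × W × H × f_frame × b) / η ≤ BW_available = f_clk × w × 2

where N_streams is the number of concurrent read/write streams (2 for a simple frame buffer), W×H×f_frame×b the raw video bit rate, f_clk the memory clock frequency, and w the memory data-bus width.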

The controller's scheduling policy directly affects DDR3 throughput, i.e. the usable bandwidth. Given DDR3's characteristics, two policies are common. Same-bank, same-row: precharge only when the row changes, which saves a great deal of overhead and raises effective bandwidth. Multi-bank ping-pong: activate rows in different banks ahead of time. Within a single bank only one row can be open at a time, and read-to-read, read-to-write, write-to-write and write-to-read turnarounds each need activate/precharge operations around them, wasting substantial bandwidth every time. DDR3, however, allows a read or write in progress to overlap with activating or precharging any row of another bank, the multi-bank activation policy, so after finishing a transaction in one bank the controller can start one in another bank after only a short gap. This greatly improves DDR3 throughput.
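As a toy illustration of the same-bank/same-row idea (entirely hypothetical and far simpler than a real controller scheduler; all names are made up):

// Toy open-row policy: remember the open row per bank and only issue
// precharge/activate when the requested row differs from the open one.
module row_policy #(
    parameter BANKS = 8,
    parameter ROW_W = 14
)(
    input  wire             clk,
    input  wire             rst_n,
    input  wire             req,           // new column access request
    input  wire [2:0]       bank,
    input  wire [ROW_W-1:0] row,
    output reg              do_precharge,  // close the old row first
    output reg              do_activate    // then open the requested row
);
    reg [ROW_W-1:0] open_row [0:BANKS-1];
    reg [BANKS-1:0] row_open;

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            row_open     <= {BANKS{1'b0}};
            do_precharge <= 1'b0;
            do_activate  <= 1'b0;
        end else begin
            do_precharge <= 1'b0;
            do_activate  <= 1'b0;
            if (req) begin
                if (row_open[bank] && open_row[bank] == row) begin
                    // row hit: the column command can go out with no overhead
                end else begin
                    do_precharge   <= row_open[bank]; // only if a row is open
                    do_activate    <= 1'b1;
                    open_row[bank] <= row;
                    row_open[bank] <= 1'b1;
                end
            end
        end
    end
endmodule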

That's all for the simulation part; the timing part is brewing...


The Lattice DDR3 Tutorial: Timing Constraints

Before reading this part, I suggest going through my simulation part first. Suppose your own project simulates fine and your code passes synthesis, but Place & Route Design fails, or throws a pile of baffling errors that drive you mad. What do you do?

That is what this part is for: getting the timing constraints of your DDR3 design done...

Now, let's begin...

My directory tree:

[screenshot: project directory]

If you have just finished simulating with your own project, open xxx.lpf (mine is named ecp3_ddr3.lpf). You will find it already carries these three lines:

COMMERCIAL;

BLOCK RESETPATHS ;

BLOCK ASYNCPATHS ;

RESETPATHS routes the reset pin over the dedicated global asynchronous reset network.

ASYNCPATHS exempts all input I/O from the timing analyzer's clock-period check on input-register paths; instead you define INPUT_SETUP preferences to reflect the real board-level timing. Leave both defaults untouched.
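For reference, board-level inputs are then constrained with INPUT_SETUP preferences; a hypothetical example (the port name and numbers are made up):

INPUT_SETUP PORT "din" 2.0 ns HOLD 0.5 ns CLKPORT "clk_in" ;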

Now, before running static timing analysis, remember to tick Map Trace, Place & Route Trace, and I/O Timing Analysis:

[screenshot: trace options]

Now open xxx\core\ddr_p_eval\ddr3core\impl\ and you will see:

[screenshot: impl folder contents]

Here synplify and precision are the two synthesis tools Lattice supports. I picked Synplify when creating the project, so go into synplify and open ddr3core_eval.lpf. This is the constraint file generated by the IP core itself, and it is invaluable; after all, we didn't write the IP, so how else would we know which paths and I/Os to constrain, and how? Copy the entire file into ecp3_ddr3.lpf. All constraint commands in this tutorial are written in this file, not in the GUI; please keep that in mind.

Now open Spreadsheet View to inspect the constraints. On opening, it automatically runs the PIO DRC; pay attention to the warnings and errors in the output, because this information matters a lot.

It pops up a whole pile of errors. Good grief, that many; ready to scream? Don't panic: read the error info one item at a time. Follow the thread and this kind of error always unravels.

[screenshot: PIO DRC errors in Spreadsheet View]

The error message is plain enough: you constrained this net, but the timing analysis tool cannot find it. What does that mean?

Come on, it means the path specified in your constraint is wrong.

Fine then, find the correct path and fix the constraint.

Path here means the module hierarchy path, not a file-system path.

For example, to clear this error: ERROR - sclk_c matches no clock nets in the design.

1. Click the Period/Frequency button and the dialog below pops up. Find sclk: it turns out to live inside my UUT module, not at the top level; no wonder the stock constraint cannot find the clock net. Close the dialog and edit ecp3_ddr3.lpf directly. For each reported error, find the corresponding hierarchy path the same way, repeat N times, and write the results into the lpf file. Take the net names from this dialog.

[screenshot: Period/Frequency dialog]

##########################################################################

# Frequency Declerations

##########################################################################

FREQUENCY NET "clk_in_c" 100.0 MHz ;

FREQUENCY NET "sclk_c" 200.0 MHz PAR_ADJ 40.0 ;

FREQUENCY NET "clkos" 400.0 MHz PAR_ADJ 80.0 ;

FREQUENCY NET "sclk2x" 400.0 MHz PAR_ADJ 80.0 ;

USE PRIMARY NET "clk_in_c";

USE PRIMARY NET "sclk_c";

USE PRIMARY NET "clkos";

USE PRIMARY NET "sclk2x";

##########################################################################

# CSM logic preferences

##########################################################################

BLOCK PATH FROM CLKNET "clk_in_c" TO CLKNET "sclk_c" ;

BLOCK PATH FROM CLKNET "clk_in_c" TO CLKNET "*clkos" ;

BLOCK PATH FROM CLKNET "sclk_c" TO CLKNET "clk_in_c" ;

BLOCK PATH FROM CLKNET "*sclk2x" TO CLKNET "clk_in_c" ;

BLOCK PATH FROM CLKNET "clk_in_c" TO CLKNET "*eclk" ;

BLOCK PATH FROM CLKNET "*clkos" TO CLKNET "*eclk" ;

BLOCK PATH FROM CLKNET "*clkos" TO CLKNET "sclk_c" ;

BLOCK PATH FROM CLKNET "*sclk2x" TO CLKNET "*clkos" ;

Change it to:

#####################################################################

# Frequency Declerations

#####################################################################

FREQUENCY NET "UUT/sclk" 200.000000 MHz PAR_ADJ 40.000000 ;

FREQUENCY NET "*clkos" 400.000000 MHz PAR_ADJ 80.000000 ;

FREQUENCY NET "*sclk2x" 400.000000 MHz PAR_ADJ 80.000000 ;

FREQUENCY NET "vclk_c" 50.000000 MHz PAR_ADJ 10.000000;

FREQUENCY NET "clk_in_c" 100.000000 MHz PAR_ADJ 20.000000;

FREQUENCY NET "UUT/u_ddr3_sdram_mem_top/clkos" 400.000000 MHz PAR_ADJ 80.000000;

FREQUENCY NET "UUT/u_ddr3_sdram_mem_top/sclk2x" 400.000000 MHz PAR_ADJ 80.000000;

USE PRIMARY NET "clk_in_c" ;

USE PRIMARY NET "UUT/sclk" ;

USE PRIMARY NET "UUT/u_ddr3_sdram_mem_top/clkos" ;

USE PRIMARY NET "UUT/u_ddr3_sdram_mem_top/sclk2x" ;

#####################################################################

# CSM logic preferences

#####################################################################

BLOCK PATH FROM CLKNET "clk_in_c" TO CLKNET "UUT/sclk" ;

BLOCK PATH FROM CLKNET "clk_in_c" TO CLKNET "*clkos" ;

BLOCK PATH FROM CLKNET "UUT/sclk" TO CLKNET "clk_in_c" ;

BLOCK PATH FROM CLKNET "*sclk2x" TO CLKNET "clk_in_c" ;

BLOCK PATH FROM CLKNET "clk_in_c" TO CLKNET "*eclk" ;

BLOCK PATH FROM CLKNET "*clkos" TO CLKNET "*eclk" ;

BLOCK PATH FROM CLKNET "*clkos" TO CLKNET "UUT/sclk" ;

BLOCK PATH FROM CLKNET "*sclk2x" TO CLKNET "*clkos" ;

Those are the clock constraints after modification. FREQUENCY constrains a clock's frequency; PRIMARY routes the net onto a global clock network. Note that the Lattice ECP3 has only eight primary clock networks, and they come with placement restrictions; a passing constraint run is the final arbiter. vclk_c above is another clock in my project, unrelated to DDR3. The hierarchy paths of these signals must match your own design; the paths above come from my project, so substitute your own. All the edits below likewise use my project's hierarchy.

2. With the clocks fixed, run Place & Route Design again... and find far fewer errors. Much better; keep going.

[screenshot: reset net error]

Oh, the rst signal cannot be found. Look in your own project: the reset is evidently not named that, so fix it:

BLOCK PATH FROM PORT "reset_*" ;

After the change, save ecp3_ddr3.lpf and Check PIO DRC: no errors left.

Oh yeah... but it isn't over yet; with only the PIO DRC clean you are just 50% done.

3. Click Place & Route Design for another timing run and wait for the results... when it finishes, open Timing Analysis View.

Still this many errors, so keep looking for causes.

[screenshot: timing analysis errors]

Open the warning messages at the bottom of the Diamond window; they give plenty of hints. One at a time:

[screenshot: COMP not found warnings]

The message again means a COMP was not found, so it is still a path problem; keep fixing paths:

LOCATE COMP "U1_ddr3_pll/PLLInst_0" SITE "PLL_R35C5" ;

LOCATE COMP "U1_clocking/sync" SITE "LECLKSYNC2" ;

Change to:

LOCATE COMP "UUT/u_ddr3_sdram_mem_top/U1_ddr3_pll/PLLInst_0" SITE "PLL_R35C5" ;

LOCATE COMP "UUT/u_ddr3_sdram_mem_top/U1_clocking/sync" SITE "LECLKSYNC2" ;

There is more:

[screenshot: PGROUP not found warnings]

LOCATE PGROUP "U1_clocking/clk_phase/phase_ff_0_inst/clk_phase0" SITE "R24C5D" ;

LOCATE PGROUP "U1_clocking/clk_phase/dqclk1bar_ff_inst/clk_phase1a" SITE "R34C2D" ;

LOCATE PGROUP "U1_clocking/clk_phase/phase_ff_1_inst/clk_phase1b" SITE "R34C2D" ;

LOCATE PGROUP "U1_clocking/clk_stop/clk_stop" SITE "R34C2D" ;

Change to:

LOCATE PGROUP "UUT/u_ddr3_sdram_mem_top/U1_clocking/clk_phase/phase_ff_0_inst/clk_phase0" SITE "R24C5D" ;

LOCATE PGROUP "UUT/u_ddr3_sdram_mem_top/U1_clocking/clk_phase/dqclk1bar_ff_inst/clk_phase1a" SITE "R34C2D" ;

LOCATE PGROUP "UUT/u_ddr3_sdram_mem_top/U1_clocking/clk_phase/phase_ff_1_inst/clk_phase1b" SITE "R34C2D" ;

LOCATE PGROUP "UUT/u_ddr3_sdram_mem_top/U1_clocking/clk_stop/clk_stop" SITE "R34C2D" ;

4. After those changes, run Place & Route Design once more and open Timing Analysis View.

[screenshot: Timing Analysis View]

And Diamond shows one more warning below:

[screenshot: read_pulse_delay PGROUP warning]

Again a node cannot be found, again a path problem, so keep editing:

LOCATE PGROUP "U1_ddr3core/U1_ddr3_sdram_phy/read_pulse_delay_0/read_pulse_delay_0" SITE "R13C2D" ;

LOCATE PGROUP "U1_ddr3core/U1_ddr3_sdram_phy/read_pulse_delay_1/read_pulse_delay_1" SITE "R22C2D" ;

Change to:

LOCATE PGROUP "UUT/u_ddr3_sdram_mem_top/U1_ddr3core/U1_ddr3_sdram_phy/read_pulse_delay_0/read_pulse_delay_0" SITE "R13C2D" ;

LOCATE PGROUP "UUT/u_ddr3_sdram_mem_top/U1_ddr3core/U1_ddr3_sdram_phy/read_pulse_delay_1/read_pulse_delay_1" SITE "R22C2D" ;

5. After those changes, run Place & Route Design one more time and open Timing Analysis View. No red highlighted entries left. Perfect...

[screenshot: Timing Analysis View, no errors]

At this point you can breathe easy: the timing constraints internal to the DDR3 IP are done.

What remains is binding pins, setting I/O standards and other basics; you can handle that yourself...

Now, let me have a cup of tea...

….

….

Summary: the DDR3 IP constraint set is generated automatically when the IP is instantiated. Copy it in, work out which module path each constrained signal actually sits under, keep rewriting the paths to match your own hierarchy, then run Place & Route Design and read the report, until no highlighted items remain.

Next, constrain your own logic. For a typical design that mostly means constraining the clocks, then checking the report for violating logic or paths, and iterating...
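For a typical user design, that boils down to a handful of preferences like the following (the net names and values here are placeholders, not from the demo):

FREQUENCY NET "pixel_clk_c" 74.25 MHz ;

FREQUENCY PORT "vclk" 50.0 MHz ;

# hypothetical false path between two unrelated clock domains
BLOCK PATH FROM CLKNET "pixel_clk_c" TO CLKNET "UUT/sclk" ;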

Look, so easy...


LATTICE DDR3 Design tips

1、Why does ispLEVER & Lattice Diamond Place and Route generate errors when I assign DDR3 Address or Command output signals to DQS pins?

For DDR3, the Address & Command outputs are generated using DDR registers (ODDRXD1 modules). The DQSP and DQSN pins do not support DDR registers, hence the error. These outputs need to be assigned to non-DQS pins. Please see the "DDR3 Pinout Guidelines" section in Technical Note TN1180 for all the DDR3 pinout rules.

2、Can the CLKP/CLKN outputs of the DDR3 memory controller be placed on the top side of the LatticeECP3 device?

The DDR3 (Double Data Rate 3) CLKP/CLKN pads use a generic output DDR function (ODDR). The recommendation is to place the CLKP/CLKN outputs on the same side as the DQ and DQS pads, because the top-side pads do not provide the high-speed DDR function needed to safely meet DDR3 performance requirements on the LatticeECP3 device. Note that DDR3 DQ/DQS pads can be located only on the left or right side. Therefore, it is recommended that you locate the CLKP/CLKN pads on the left or right side, depending on where the DQ/DQS pads are located. See the TN1180 LatticeECP3 High-Speed I/O Interface, DDR3 Pinout Guidelines section for more general pinout guidelines.

3、Does the Lattice DDR3 IP core automatically perform the ZQ calibration and Auto Refresh commands during or after the initialization?

During initialization: the DDR3 controller IP core performs both ZQ calibration long (ZQCL) and auto refresh commands during the DDR3 initialization process. This is a requirement defined by the JEDEC DDR3 specification.

After initialization: once the initialization process is completed, auto refresh is still performed by the core at the interval configured with the tREFI parameter (refresh interval time) and the configured Auto Refresh Burst Count. Therefore, there is no need for you to do anything for auto refresh.

As for ZQ calibration, it is an optional process you can perform on demand. The DDR3 IP core does not provide automatic periodic ZQ calibration once initialization is completed. However, the core provides two user commands, ZQ_LNG (ZQ calibration long) and ZQ_SHRT (ZQ calibration short), to calibrate the DDR3 memory as needed. Since this process may impact throughput, and it is not a requirement once initialization is completed, ZQ calibration will only run if implemented by the user.

4、Can I connect both the “mem_rst_n” and “rst_n” signals in the Lattice DDR3 IP core together to a system reset to meet the JEDEC initialization requirement?

The "rst_n" signal resets both the DDR3 memory controller and the DDR3 memory devices, while the "mem_rst_n" signal resets only the DDR3 memory devices. The JEDEC specification has two different cases of reset initialization. 1. Power-up reset initialization: the memory reset needs to be asserted for at least 200us with stable power. In this case there is no need for the memory clock (CK) to be stable, according to JEDEC. Since the DDR3 IP core does not provide a wait counter for this requirement, it is the user's responsibility to meet the required reset duration. 2. Reset assertion with stable power: once the reset is asserted, according to JEDEC it must remain below 0.2 * VDD for a minimum of 100ns. The Lattice DDR3 IP core supports this requirement: when you assert a reset pulse shorter than 100ns on mem_rst_n, the core ensures it is asserted for at least 100ns. With these conditions, you can connect your system reset to both "rst_n" and "mem_rst_n" if your system reset duration is guaranteed to be longer than 200us after power becomes stable. If not, you will need to keep mem_rst_n asserted for at least 200us with stable power to follow the JEDEC memory power-on reset requirement.

5、How can I configure the DDR3 memory clock to double the reference frequency (a 1:2:1 ratio) instead of the default 4x multiple (1:4:2)?

The CSM (Clock Synchronization Module) of the DDR3 memory controller IP core multiplies the input reference clock frequency by four for the DDR3 bus operations and by two for the local bus operations. That is, the DDR3 IP core uses a 1:4:2 ratio (input clock vs. DDR3 clock vs. local clock). If you use a DDR3 IP core version 1.2 or later (or any DDR3 PHY IP core version), you can manually change this ratio by following the steps below:

1. Open the ddr3_pll.v file inside the models folder using a text editor. It is located under ddr_p_eval\models\ecp3.
2. Launch IPexpress and select "PLL". Configure the PLL with the options shown in the ddr3_pll.v file. Make sure you assign the module name "ddr3_pll".
3. Change the input and output clock frequencies to your desired values. If you want to use a 150MHz DDR3 reference clock input with a 300MHz DDR3 memory clock, set CLKOP=300.0MHz and CLKOK=150.0MHz. Click "Calculate", then "Generate".
4. If the generated PLL has more input or output pins than the original ddr3_pll.v, you may need to manually edit the generated file so the module can be properly instantiated. Use the original ddr3_pll.v file as the reference.
5. Alternatively, you can edit the original ddr3_pll.v file with the divider values and parameters from the generated PLL module. Choose whichever way is more convenient.

6、I cannot assign the DDR3 memory clock (CK) pads to Bank 1 during the DDR3 core generation when the left side of a LatticeECP3 device is selected for a DDR3 interface running at 300MHz. How can I use the pins in Bank 1 for CK?

LatticeECP3 has the following pinout guideline for a DDR3 CK pair assignment (See TN1180 DDR3 Pinout Guidelines section.):
It’s recommended that the CK pads are located on the same side as data pads when the DDR3 bus is running at high speed (400MHz).
At a lower operating speed such as 333 or 300MHz, however, CK can be located on either the same side as data pads or a top-side bank. In this example, both Bank 0 and Bank 1 are legal locations to accommodate a CK pair if your target speed is 300MHz or 333MHz. The reason why the DDR3 IP core allows only Bank 0 in this case is because assigning the CK pad to Bank 1 is generally not practical in terms of pin resource allocation and static timing achievement. If the CK pads are located on the other side of the top bank, for example, it may cause a static timing failure if the internal routing delays are excessive. Although a pair in Bank 1 can be used as CK, the DDR3 IP core does not encourage you to use it due to this reason.
If you have to use a pair in Bank 1, you can generate a DDR3 IP core with CK assigned to Bank 0 first. Then, you can simply update the target bank for the CK pair from Bank 0 to Bank 1 in the preference file (.LPF) as shown below.
DEFINE PORT GROUP "EM_DDR_CLK_GRP" "em_ddr_clk_*" ;
IOBUF GROUP "EM_DDR_CLK_GRP" IO_TYPE=SSTL15D BANK=1 ;
You will need to make sure not to violate the static timing requirement.

7、What should I do with the LatticeECP3 DDR3 memory interface VTT termination?

Only external VTT termination should be used for LatticeECP3 DDR3. Use of LatticeECP3’s internal on-die termination (ODT) is not recommended. According to the eye diagrams and simulation results from the Lattice factory tests, the recommended external resistor value is 100-120 ohms. It is encouraged that board designers perform the signal integrity simulations if possible for best resistor value in their environment. Below is a general termination guideline for LatticeECP3 DDR3 external VTT termination:
1. Placement of any external discrete resistors or resistor packs (RPACKS) is critical and must be placed within 0.6 inches of the LatticeECP3 ball.
2. 120-ohm BGA RPACKs (CTS RT2432B7 type) are recommended for the 64- and 32-bit interfaces for better routing and density. Each RPACK contains 18 resistors in a very small BGA footprint. Note that only 120, 75 and 50 ohm values are available in this package type. (http://www.ctscorp.com/components/Datasheets/ClearOneDDRSDRAMK.pdf)
3. 4x1 RPACKs (CTS 741X083101JP type) can also be used for cases where a 100-ohm value is needed without routing/density issues. (http://www.ctscorp.com/components/Datasheets/CTSChipArrayDs.pdf)

8、Can I use a different rate of the input reference clock other than 100MHz when a 400MHz/800Mbps DDR3 interface is implemented?

Yes, you can. If you are using a Lattice DDR3 memory controller IP core version 1.2 or later, you can use a different input reference clock rate. The original clock synchronization module (CSM) in earlier DDR3 IP core versions requires a fixed 1:2:4 ratio among the reference clock input (clk_in), system clock (sclk) and DDR3 clock (eclk), respectively. The newer CSM in v1.2 or later supports variable clock ratios between the input reference clock and the system clock; the ratio between the system clock and the DDR3 clock must remain 1:2.
If you use a 75MHz input reference clock for 400MHz DDR3 operations, for example, the supported clock ratio becomes 75MHz(clk_in) : 200MHz(sclk) : 400MHz(eclk).
Note that the CSM from the generated DDR3 IP core has the 1:2:4 ratio by default, and you will need to regenerate the PLL module to provide your desired clock ratio.

9、How can I terminate a DDR2 or DDR3 memory interface to VTT in LatticeECP3?

LatticeECP3 requires external termination to VTT for DDR1, DDR2 and DDR3 memory interface implementations. All DQ and DQS pins should be terminated to VTT using external termination resistors. The VTT level is 1/2 of VCCIO (0.9V for DDR2 and 0.75V for DDR3). SSTL (Stub Series Terminated Logic) I/O signaling requires parallel termination to VTT on the receiving end. While DDR2 memory has the ODT feature to fulfill this requirement for the write operations, the LatticeECP3 side should also have the termination for the read operations. The external termination resistors are used for this purpose. Note that LatticeECP2/M and LatticeXP2 use the same external termination scheme as LatticeECP3 for DDR1 and DDR2 memory interfaces.
It is suggested that you perform SI (signal integrity) simulation to obtain the best termination resistor value. If SI simulation is not available, you can use the Lattice factory recommended values (75 ohms for DDR2 and 100 to 120 ohms for DDR3). A short trace length between a termination resistor and a LatticeECP3 ball is also crucial for good signal integrity. Lattice recommends keeping this trace no longer than 0.6".

10、The availability and cost of a 1.5V clock driver make it an unattractive solution for driving the reference clock input of the DDR3 memory interface. Are there any alternatives?

There are several alternatives that can be used to drive the LatticeECP3 DDR3 reference clock input:
1. Use an LVDS clock driver and connect directly to the DDR3-dedicated PLL input pair. LVDS25 is a compatible I/O type that can be used in a 1.5V VCCIO bank. This method provides you with the best signal integrity result.
2. You can internally drive the DDR3-dedicated PLL through the primary clock net. Choose an I/O bank of the device with an input level that is compatible with the clock driver you are using. Connect the clock driver to the PCLK (primary clock) input pad (or differential pair) of that bank. While the primary clock can add some amount of clock net jitter to the PLL, this method is still an acceptable solution that can be used as a secondary option. This option is also good for the single-ended clock driver.
3. Another option is to use a resistor-divider circuit that translates your clock driver output level to a compatible level of the 1.5V VCCIO bank. This method is useful when the clock driver is single-ended.

11、The generated DDR3 IP core includes a DDR3 DIMM (dual in-line memory module) instantiation module (ddr3_dimm_32.v) in the testbench when the selected memory type is On-board Memory. How can I instantiate the DDR3 device memory model in my testbench for simulation?

Although the file name includes "dimm", the generated memory instantiation module such as "ddr3_dimm_32.v" is a memory wrapper that covers all DDR3 memory configurations, including the On-board Memory type. This wrapper module includes all possible memory configurations and types, including UDIMM, RDIMM, and discrete memory, with write-leveling and address-mirroring considerations. Therefore, it is fine to use this memory module for the simulation of any generated DDR3 IP core. If you do not want to use the memory wrapper file generated under the On-board Memory option, you can instantiate the memory model directly; just make sure the ddr3_parameters.vh file is properly included in the testbench. The memory model, ddr3.v, cannot run without this parameter file.

12、Should I reset the Lattice DDR3 controller IP core after changing the “read_pulse_tap” signal?

The "read_pulse_tap" port is an input signal to the DDR3 controller IP core. Each DQS group has its own 3-bit read_pulse_tap port to control the READ signal timing to the DQSBUF hardware module. The DDR3 IP core allows dynamic READ pulse timing changes, so the value can be changed on the fly and there is no need to reset the DDR3 IP core. It is a good idea to change the read_pulse_tap value while the bus is idle rather than during DDR3 transactions, to avoid any possible instantaneous data corruption. Note that in real applications read_pulse_tap is used as a static input, although the IP core allows dynamic changes.

13、What is your recommendation to reduce or eliminate SSO noise related issues for DDR3 interface implementation using a LatticeECP3 device?

The following are the general SSO (simultaneous switching output) considerations and guidelines for DDR3 interface implementations:
Proper termination is needed to minimize SSO impacts. With sub-optimal termination, the SSO noise can be aggravated because the signal energy has no place to go but into the supply or ground plane. Follow the DDR3 termination guideline specified in TN1180.
Write leveling is the best way to decrease SSO. Make sure you turn on the Write Leveling option during the core generation if your application uses DDR3 DIMM. Write leveling will spread the read DQS/DQ arrival time to FPGA in time domain, which essentially spreads out noise and makes its peak noise level much lower.
Check your slew rate and drive strength settings. Slow slew with 8mA SSTL15 drive strength generates less SSO than fast slew with 10mA.
Check the noise on VCCIO when the SSO noise is measured. If you see the same or a similar noise pattern on VCCIO, it could be a contributor, and pseudo powering will help. Use of pseudo power pads noticeably decreases SSO and is an effective way to tame it. If you have unused I/O pads in the DDR3 banks, make them pseudo VCCIO and GND pads by connecting them to the VCCIO power and GND planes on the PCB, then set them to OUTPUT with maximum drive strength (SSTL15 10mA works for this), driving High for VCCIO pads and Low for GND pads. They provide extra VCCIO power and stable grounding and decrease SSO noise. It is recommended that more than 2/3 of the pseudo power pads be connected to VCCIO.
Spread data DQS group pads as much as possible in a bank. If you have 7 DQS groups in a bank and want to implement a 32-bit DDR3, for example, having them assigned to “d,x,d,x,d,x,d” will have significantly lower SSO impact than a consecutive pad assignment like “x,x,d,d,d,d,x”. (where x: non-data DQS, d: data DQS)
If SSO noise on the address and control signals is concerned, use of series termination resistors on the address/command lines would help decrease SSO. 22-ohm or smaller value is recommended.
Isolating the address and command signals from the switching DQ signals to a different bank is also a good way to decrease SSO.
Probe measurement is also an important factor due to added noise from the ground loops and plane resonances. Make sure the ground lead of the probe is as short as possible, preferably less than 1/2 inch.
Well considered PCB layout is crucial to minimize the system’s SSO impact. Follow the generally known high-speed PCB implementation guidelines.

14、Which VREF pad should I use between VREF1 and VREF2 for a DDR2 or DDR3 memory controller?

Only VREF1 should be used for all DDR1, DDR2 and DDR3 memory controller applications. This is because only the VREF1 pad includes a dedicated circuit to detect preamble stages on the DQS signal coming from the memory device. It is important to know that you must not tie VREF1 and VREF2 together, because the detector circuit characteristics can be affected by the connected VREF2 pad when the VREF2 pad's pull-up resistor is turned on. Note that you can use VREF2 as a general I/O in DDR memory interface applications.

15、Why do I get the message “ERROR – map: IO buffer em_ddr_data_c_0 drives IO buffer em_ddr_data_pad_0 directly, but this is not possible” on most DDR3 interface signals after instantiation of a Lattice DDR3 IP core?

This FAQ is applicable to all Lattice DDR memory controller IP cores (DDR1/DDR2/DDR3/DDR3-PHY/LPDDR).
When a DDR memory interface signal uses a dedicated DDR I/O function, the DDR memory controller or PHY IP core netlist file (.ngo) includes an I/O buffer in conjunction with the required IOLOGIC block. Therefore, you must keep the synthesis tool from automatically inserting an additional I/O pad on the signal during synthesis. Otherwise, the synthesis tool infers an I/O pad on every I/O port of your entire design; if a DDR3 interface signal gets an inferred I/O buffer, it conflicts with the one inside the netlist file, which is why you got the error message.
Note that Lattice DDR3 controller/PHY IP core includes the I/O buffers on all DDR3 memory interface signals except the RESET# signal. Other DDR memory IP cores (DDR1/DDR2/LPDDR) include the I/O buffers only on the data (DQ) and data strobe (DQS) signals.
The user design that instantiates the IP core must follow the I/O pad handling configuration shown below:
Verilog: Use the provided black box core instantiation file ([core_name]_bb.v) found in the core root folder. This black box instantiation file lists the DDR3 signals that must not receive additional I/O buffers, as shown below:
/* synthesis syn_black_box black_box_pad_pin="em_ddr_data[31:0],em_ddr_dqs[3:0],em_ddr_clk[0:0],em_ddr_odt[0:0],em_ddr_cke[0:0],em_ddr_cs_n[0:0],em_ddr_addr[13:0],em_ddr_ba[2:0],em_ddr_ras_n,em_ddr_cas_n,em_ddr_we_n" */; // DDR3 32-bit IP core example

/* synthesis syn_black_box black_box_pad_pin="em_ddr_data[31:0],em_ddr_dqs[3:0]" */; // DDR2 32-bit IP core example
VHDL: Your VHDL design that instantiates the DDR3 core needs the following attribute declaration:

attribute black_box_pad_pin : string;
attribute black_box_pad_pin of ddr3core : component is "em_ddr_data(15:0),em_ddr_dqs(1:0),em_ddr_clk(0:0),em_ddr_odt(0:0),em_ddr_cke(0:0),em_ddr_cs_n(0:0),em_ddr_addr(12:0),em_ddr_ba(2:0),em_ddr_ras_n,em_ddr_cas_n,em_ddr_we_n"; -- DDR3 16-bit example

attribute black_box_pad_pin : string;
attribute black_box_pad_pin of ddr3core : component is "em_ddr_data(31:0),em_ddr_dqs(3:0)"; -- DDR2 32-bit example

The wrapper file (ddr_sdram_mem_top_wrapper.vhd) in the generated core is a good reference to follow.

16、How do I place DDR3 interface pins to minimize SSO impact?

1. Try using the DQS groups in the middle of the (right or left) edge if the DDR3 data width does not require the whole edge of the LatticeECP3. Avoid the corner DQS groups if possible.

2. Locate a spacer DQS group between two adjacent data DQS groups if possible. A DQS group becomes a spacer DQS group if the I/O pads inside the group are not used as data pads (DQ, DQS, DM). The pads in a spacer group can be used for address, command, control or CK pads as well as for user logic or pseudo power pads.

3. It is recommended that you locate four or more pseudo VCCIO/ground (GND) pads inside a spacer DQS group. An I/O pad becomes a pseudo power pad when it is configured as OUTPUT with maximum drive strength (SSTL15, 10mA) and connected to the external VCCIO or ground plane on the PCB. Your design must drive the pseudo power I/O pads according to the external connection (i.e., assign them as OUTPUT and drive "1" for pseudo VCCIO pads and "0" for pseudo GND pads in your RTL; a minimal sketch follows this list). The recommended four pads are the two pads at both ends (the first and last in the group) and the two DQS (positive and negative) pads in the middle.

4. You may have one remaining pad in a data DQS group that is not assigned as a data pad in a DDR3 interface. Assign it to pseudo VCCIO or pseudo GND; the preferred location is in the middle of the group (right beside the DQS pads). Note that you will not have this extra pad if the DQS group includes a VREF1 pad for the bank.

5. Assign the DM (data mask) pad in a data DQS group close to the other side of the DQS pads from where a pseudo power pad is located. If the data DQS group includes VREF1, locate DM on the other side of VREF1 with respect to DQS. DM can act as an isolator due to its almost static nature in most applications.

6. Other DQS groups (neither data nor spacer groups) can be used to accommodate DDR3 address, command, control and clock pads. It is recommended that you assign all or most DQS pads (positive and negative) in these groups to pseudo power. Since LatticeECP3 DQS pads have a dedicated DDR function that cannot be shared with other DDR3 signals, they are good pseudo power pad candidates.

7. You can assign all unused I/O pads to pseudo power if you do not plan to use them in the future. Assigning more I/O pads to VCCIO is desirable, because LatticeECP3 has only four VCCIO pads in each bank while more GND pads are available. Keep the total pseudo power pad ratio (VCCIO vs. GND) between 2:1 and 3:1.

8. Although not strictly necessary, it is slightly more effective to assign pseudo VCCIO to the positive pad (A) and GND to the negative pad (B) of a PIO pair if possible.

9. If a bank includes unused input-only pads, such as dedicated PLL input pads, connect them to VCCIO on your PCB. They can also serve as isolators, and the board connections provide good shielding. No extra consideration is necessary in your design.

10. It is a good idea to shield the VREF1 pad by locating pseudo power pads around it if the VREF1 pad is not located in a data DQS group.

11. Avoid placing fast-switching signals close to the XRES pad. XRES requires an external resistor that is used to create the bias currents for the I/O. Since this resistor is a calibration reference for sensitive on-chip circuitry, careful pin assignment around the XRES pad is also necessary to produce less-jittery PLL outputs for DDR3 operation. See the TN1180 LatticeECP3 High-Speed I/O Interface, DDR3 Pinout Guidelines section for more general pinout guidelines along with these SSO guidelines.
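Item 3 above takes only a few lines of RTL; a minimal sketch (the pad counts and port names are made up):

// Hypothetical pseudo power pad driver: these outputs go to pads that are
// physically wired to VCCIO or GND on the PCB, per the guidelines above.
module pseudo_power_pads (
    output wire [3:0] pseudo_vccio,  // pads connected to VCCIO on the board
    output wire [1:0] pseudo_gnd     // pads connected to GND on the board
);
    assign pseudo_vccio = 4'b1111;   // drive High into the VCCIO-wired pads
    assign pseudo_gnd   = 2'b00;     // drive Low into the GND-wired pads
endmodule

The corresponding preferences would then set these ports to SSTL15 with 10mA drive.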

17、What is the maximum DDR3 device loading that can be driven by the Lattice DDR3 controller IP core?

Both the data and address/command bus loading factors should be considered to answer the question.
The Lattice DDR3 controller and DDR3 PHY IP cores are validated for up to two DDR3 device loads on the DQ, DQS and DM signals at 400MHz/800Mbps using a LatticeECP3 device. This means the IP cores support up to two-rank (two chip-select) memory configurations. If your application is DDR3 DIMM (dual in-line memory module) based, you can use a single- or dual-rank DIMM module. Use of two separate single-rank modules is not recommended, because the core's DDR3 ODT (on-die termination) control is optimized for a single-DIMM configuration.
As for the address/command bus, you can drive up to 16-device loading on each address, command or control pad. 16-device loading is typical for a dual-rank DIMM. We recommend you use the 2T option when a dual-rank DIMM is used to provide the better setup and hold timing window. The 2T option is available when you generate a DDR3 core targeting a dual-rank DDR3 DIMM memory configuration.

18、Why does my regenerated DDR3 IP core have different CL and CWL values from the original LPC file that has CL=7 and CWL=6 at 400MHz?

The reason you see different CL (CAS Latency) and CWL (CAS Write Latency) values after core regeneration is that the IPexpress DDR3 GUI script performs a JEDEC compatibility check. The original LPC you used contains an illegal setting, CL=7 with CWL=6 at 400MHz, and the GUI script changed the values back to defaults because of the violation.
You see this regulated core regeneration if you are using a DDR3 IP core version 1.3 or earlier. To support special DDR3 applications that need to run outside JEDEC-compatible ranges, future DDR3 core releases will not perform the JEDEC compatibility check during core regeneration.
To work around this issue with DDR3 v1.3 or earlier, manually set CL/CWL in the GUI when the core is regenerated.

19、Why do I have an error during the mapping of a DDR3 IP-based design, saying “Error: Output buffer drives output buffer: each IO pad requires one and only one buffer…”?

This map error is caused by the duplicated IO buffers which are located both inside the IP core netlist file (.ngo) and your top-level code that instantiates the IP core netlist. DDR3 IP cores already include all the IO buffers for the DDR3 bus signals inside the ngo file. Therefore, you must disable the IO buffer insertion during the synthesis of your top-level module. You can do this by telling the synthesis tool not to insert any IO buffer to those signals.
The following attribute should be implemented in your Verilog or VHDL top.
black_box_pad_pin
This tells the synthesis tool that the I/O pads are already included in the black box (the DDR3 core netlist) so that the top level does not instantiate additional I/O buffers. See the following examples (the core name is "ddr3core" in these examples):
VHDL:
attribute syn_black_box : boolean;
attribute syn_black_box of ddr3core : component is true;
attribute black_box_pad_pin : string;
attribute black_box_pad_pin of ddr3core : component is "em_ddr_data(31:0),em_ddr_dqs(3:0),em_ddr_clk(0:0),em_ddr_odt(0:0),em_ddr_cke(0:0),em_ddr_cs_n(0:0),em_ddr_addr(13:0),em_ddr_ba(2:0),em_ddr_ras_n,em_ddr_cas_n,em_ddr_we_n";
Verilog:
Add the following synthesis directive to the module definition (see a IP core Verilog header file for the complete structure):
/* synthesis syn_black_box black_box_pad_pin="em_ddr_data[63:0],em_ddr_dqs[7:0],em_ddr_clk[0:0],em_ddr_odt[0:0],em_ddr_cke[0:0],em_ddr_cs_n[0:0],em_ddr_addr[27:0],em_ddr_ba[2:0],em_ddr_ras_n,em_ddr_cas_n,em_ddr_we_n" */;

20、Some of the DDR3 IP core preferences are being ignored in my design, and I am getting a few timing errors. How can this happen?

There are two possibilities:
1. The signal paths in the ignored preferences may not be correct. This usually happens when a user takes the IP core preferences as-is without localizing them. If any hierarchy has been added, the paths in the original preferences must be updated. For example, assume you have the following preference generated from IPexpress:
LOCATE PGROUP "clocking/clk_phase/phase_ff_0_inst/clk_phase0" SITE "R32C5D" ;
If there is any hierarchy change, for example another top level instantiates the core with the name "ddr3", the preference must be updated accordingly as shown below:
LOCATE PGROUP "ddr3/clocking/clk_phase/phase_ff_0_inst/clk_phase0" SITE "R32C5D" ;
See the "Handling DDR3 IP Preferences in User Designs" section in the IP user guide, IPUG80.PDF.
2. The target device may have been changed. If so, a new DDR3 core with the same configuration needs to be regenerated targeting the new device, to obtain the new preference set for it. The location in the example preference varies not only with the device but also with the package size. Once you generate a core targeting the right device, you will get the corresponding locations from the generated core LPF. Make sure the DQS pin locations also get updated. Once the new preferences are obtained, their paths can be localized as explained in case #1 above.

21、Where can I get the maximum skew data between the DDR3 CK and address/command pads in LatticeECP3?

In DDR memory interfaces, the CK rising edge is ideally located right in the middle of the address and command eyes to maximize the tIS and tIH margin. LatticeECP3 achieves this by generating the CK and the address/command signals from the same phase clocks (2x and 1x, respectively) and then inverting the CK output phase. Since the CK and address/command generation uses the dedicated DDR IO blocks, there is no outstanding data-path skew difference between them. However, there is a clock skew difference between them, and the difference is clearly listed in the datasheet. Since both CK and the address/command signals are driven by primary clocks from the same PLL, take the maximum primary clock net skew from the datasheet to determine the worst-case window.
See the “LatticeECP3 External Switching Characteristics” table in the LatticeECP3 datasheet. Find your device and apply the number for the following cases:
1. tSKEW_PRIB: take this if the DDR3 address/commands and CK are inside the same bank
2. tSKEW_PRI: take this if the DDR3 address/commands and CK are located in different banks
Note that the numbers in the table include both the clock distribution skew and IO pad skew.

22、Can I connect an external Low Voltage Differential Signal 2.5V (LVDS25) clock output to a LatticeECP3 DDR3 bank which is 1.5V VCCIO bank?

Yes, you can drive an external Low Voltage Differential Signaling (LVDS) clock generator into an input pair of a LatticeECP3 1.5V VCCIO bank. Although LatticeECP3 LVDS25 is characterized at 2.5V and 3.3V, you can safely use an external LVDS25 driver to drive LatticeECP3 1.5V input pads. The inputs on the left and right edges of LatticeECP3 have a PCI clamp circuit that clamps the input voltage at VCCIO + 0.3V. This allows you to use LVDS input up to 1.8V with a common-mode voltage up to 1.75V, which provides enough DC signaling margin for standard LVDS drivers. There is no problem implementing this in software, because LVDS25 is a compatible I/O type for a 1.5V VCCIO bank. SSTL15D can also be used to receive an external LVDS25 input.

23、How do I implement differential SSTL pads in software for my DDR memory interface design?

Differential SSTL (Stub Series Terminated Logic) I/O types are specified using a Place and Route (PAR) preference called "IOBUF". You only need to specify the positive end of the differential SSTL pair in your RTL design; the differential I/O appears in your RTL like any other single-ended I/O. The software automatically assigns the negative-end pads when the IO_TYPE=SSTL18D_II attribute (SSTL25D_II for DDR1, SSTL15D for DDR3) is used in combination with the IOBUF preference. See the following example:
In RTL (Verilog):
output em_ddr_clk;
inout em_ddr_dqs;

In RTL (VHDL):
em_ddr_clk : out std_logic;
em_ddr_dqs : inout std_logic;

In the preference file (.lpf):
LOCATE COMP "em_ddr_clk" SITE "U2";
IOBUF PORT "em_ddr_clk" IO_TYPE=SSTL18D_II;
LOCATE COMP "em_ddr_dqs" SITE "AM6";
IOBUF PORT "em_ddr_dqs" IO_TYPE=SSTL18D_II;
After the design has been mapped and placed, the pad report file (.pad) shows the positive and negative pin assignments:

| U2/6  | em_ddr_clk+ | SSTL18D_II_OUT  | PL62A  | LDQ67   |
| U1/6  | em_ddr_clk- | SSTL18D_II_OUT  | PL62B  | LDQ67   |
| AM6/6 | em_ddr_dqs+ | SSTL18D_II_BIDI | PL121A | LDQS121 |
| AN6/6 | em_ddr_dqs- | SSTL18D_II_BIDI | PL121B | LDQ121  |

24、How do I implement multiple DDR2/3 memory interfaces in one side of LatticeECP3 when there is only one DQSDLL available per side?

LatticeECP3 devices have one DQSDLL per side. The DQSDLL has an input port called UDDCNTLN that allows its DLL code value (the DQSDEL output) to be updated while it is asserted Low. The updated code is used to generate precise PVT (Process, Voltage, Temperature) compensated delays for DDR write and read operations while UDDCNTLN is deasserted High. The memory controller must properly control UDDCNTLN to take advantage of PVT-compensated DDR write and read operations: UDDCNTLN must go active only while the memory controller is not performing any DDR read or write operation, to avoid data corruption that could be caused by dynamic changes of the DLL code. When multiple DDR3 memory interfaces are implemented on the left or the right side, the DQSDLL on that side must be shared so that all controllers can utilize the PVT compensation.

If your DDR2/3 memory controller has an active-Low DLL update control output and you want to implement N memory controllers on the same side, connect each controller's output to an N-input OR gate and the OR gate output to the UDDCNTLN input of the DQSDLL. If the update control output is active-High, use an N-input NAND gate instead.
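A sketch of the active-Low case described above (the module and request names are made up; UDDCNTLN is the DQSDLL port named in the answer):

// Hypothetical sharing of one DQSDLL between two controllers (active-Low).
// The OR output is Low -- allowing a DLL code update -- only when BOTH
// controllers are idle and requesting an update at the same time.
module uddcntln_share (
    input  wire update_n_ctrl0,  // active-Low update request, controller 0
    input  wire update_n_ctrl1,  // active-Low update request, controller 1
    output wire uddcntln         // to the DQSDLL UDDCNTLN input
);
    assign uddcntln = update_n_ctrl0 | update_n_ctrl1;
endmodule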

25、Why do I need to have external VTT termination only on the DDR2/3 (Double Data Rate) data signals at the Lattice FPGA side but not for the address, command and control signals?

DDR (Double Data Rate) memory interfaces use SSTL signaling, which requires parallel termination to VTT at the receiver side. The external VTT termination on the data signals serves the memory controller side during read operations. Since the address, command and control signals are outputs from the memory controller, VTT termination is not required at the controller side; instead, those signals need to be terminated to VTT at the memory side, because the DDR2/3 memory is the receiver for them. The external termination resistors on the data signals must be located as close to the ECP3 pins as possible, with no more than 600-mil (0.6") trace length. We recommend running signal integrity (SI) simulation to determine the best termination value. If SI simulation is not available, parallel termination of 100~120 ohms to VTT for DDR3 (75 ohms for DDR2) is recommended.

26、Can I use all available DQS pads in a Lattice FPGA device for my DDR1/2/3 memory controller applications?

It depends on your memory controller application. Some DQS pad groups may not provide enough associated DQ pads, because some DQ pads may not be bonded out. Although not many DQS groups have different DQ counts, you should pay careful attention when choosing a DQS pin and check whether all the associated DQ pads of the selected DQS group are enough to meet your application's needs.
The decision will usually depend on whether your memory controller uses eight DQ pads per DQS or four. While the majority of DDR memory applications require eight DQ pads per DQS, there are others, such as an RDIMM memory controller, that use only four DQ pads per DQS. For eight DQ per DQS, use the following guideline:
Minimum number of DDR1 DQS group pads: DQS (1) + DQ (8) + DM (1) = 10 pads
Minimum number of DDR2 DQS group pads: DQS (1) + DQ (8) + DM (1) = 10 pads (single-ended DQS); DQS (2) + DQ (8) + DM (1) = 11 pads (differential DQS)
Minimum number of DDR3 DQS group pads: DQS (2) + DQ (8) + DM (1) = 11 pads (differential DQS)
Notes:
1. This guideline considers DM a mandatory signal. If DM is not required, subtract one from the minimum required size.
2. If a DQS group includes a VREF1 pad for the bank, you have to count one additional DQ/VREF1 dual-function pad.
