DRC: drc(width) much slower than width check #1195

Open
klayoutmatthias opened this issue Nov 27, 2022 · 1 comment

@klayoutmatthias
Collaborator

Test case from: #1189

The following code:

deep
metal1 = polygons(34, 0)
metal1.drc(width <= 0.34.um)

is much slower than

...
metal1.width(0.34.um + 1.dbu)

(~60s vs. 9s)
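
For reference, a minimal self-contained script to compare the two
forms could look like this (a sketch; the source and report file
names are placeholders):

# hedged sketch of a comparison deck; file names are placeholders
source("input.gds")
report("width_check.lyrdb")

deep

metal1 = polygons(34, 0)

# universal DRC form (slow in this case)
metal1.drc(width <= 0.34.um).output("width_drc", "metal1 width <= 0.34 um")

# plain width check (fast); "+ 1.dbu" turns the "<" semantics of
# width() into the "<=" check above
metal1.width(0.34.um + 1.dbu).output("width_plain", "metal1 width <= 0.34 um")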

klayoutmatthias added a commit to klayoutmatthias/globalfoundries-pdk-libs-gf180mcu_fd_pr that referenced this issue Nov 29, 2022
1. No input merging in deep mode

Deep mode benefits from keeping large layers (e.g. metal1)
distributed across the hierarchy, so the inputs are no longer
merged in deep mode.
(BTW: the same question is not easy to answer for tiled mode.
I would rather not merge at all, as merging happens internally
anyway.)
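
A minimal sketch of how the merge can be made conditional (the
"run_deep" flag is a placeholder, not the PDK's actual switch):

run_deep = true                 # placeholder; derived from the run options in the PDK
deep if run_deep

metal1 = polygons(34, 0)
metal1 = metal1.merged unless run_deep   # keep the hierarchical distribution in deep mode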

2. drc(width < ...) replaced by width

See issue KLayout/klayout#1195

In the future, drc may reach the same performance as width.

3. Precomputation of some redundant expressions

e.g. contact_logic = contact.outside(sramcode), which is used a
couple of times.

This patch by far does not capture all the redundant expressions;
there is still room for improvement. I addressed some of the
dominant ones.
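
A sketch of the pattern (the check values and rule names below are
placeholders, not actual PDK values):

# compute the shared subexpression once ...
contact_logic = contact.outside(sramcode)

# ... and reuse it instead of repeating contact.outside(sramcode)
contact_logic.width(0.22.um).output("CO.W", "contact width (placeholder)")
contact_logic.space(0.25.um).output("CO.S", "contact space (placeholder)")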

4. Choosing enclosed instead of enclosing (CO.6)

a.enclosing(b) is equivalent to b.enclosed(a), but one form may be
more efficient than the other. Here, contact.enclosed(metal1) is
more efficient than metal1.enclosing(contact).

Reason: the first argument of such operations usually determines
the hierarchical scope and lookup region. contact is more localized
and sparser than metal1, so traversing contact hierarchically is
easier (basically only standard cells need to be visited), and
collecting the neighborhood only means looking up a small region of
the layout for interacting shapes. That makes the operation more
efficient with contact as the first argument.

BTW: the same is true for symmetric operations such as "and"
or "separation".

5. Avoid long "or" chains

e.g. this:

plfuse.not_outside(comp.or(nplus).or(esd).or(sab).or(resistor).or(metal1).or(metal2))

is more efficiently written as:

plfuse.not(plfuse.outside(comp).outside(nplus).outside(esd).outside(sab).outside(resistor).outside(metal1).outside(metal2))

Reason: each "outside" operation gradually reduces the complexity
of the layer. If no plfuse is present, the outside operations are
essentially null operations. So the argument to "not" computes very
efficiently, while the "or" chain above first builds a very complex
region just to discover that little or nothing of it is needed.

BTW: "or" is a real polygon merge and usually is more
expensive than required. An alternative is "+" which just
joins the polygon collections.
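
For example, when a merged result is not needed, the combined layer
can be built with "+" instead of "or" (a sketch, not taken from the
deck; the variable name is a placeholder):

# "+" only joins the polygon collections, it does not merge them
blockers = comp + nplus + esd + sab + resistor + metal1 + metal2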

6. Rewriting MDN.3a, 3b, 4a and 4b.

The main goal is to avoid long-distance width interactions.

The basic idea is to consider the real gate area of the LDMOS
device, which is "(poly & comp) - mvsd". This is the region to
consider as it defines the channel. A basic width check on these
rectangles is cheap and allows devices outside the range of
interest to be filtered out quickly. Channel width and length can
be differentiated by looking at edges inside comp or poly2.

Note that "inside_part" and "outside_part" deliver the edge parts
that run inside the second argument (a region) or entirely outside
it, in contrast to "and" and "not", which treat edges on the region
boundary as "inside".
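
A sketch of the basic construction (layer names follow the deck;
no check values are applied here):

# channel region of the LDMOS device
ldmos_gate = (poly2 & comp) - mvsd

# edge parts running strictly inside comp resp. poly2 - these allow
# channel width and length sides to be told apart
gate_edges_in_comp  = ldmos_gate.edges.inside_part(comp)
gate_edges_in_poly2 = ldmos_gate.edges.inside_part(poly2)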

7. Rewriting MDP.1, 1a and 2.

See 6. for the motivation and technique.

8. Adapting "regression.py"

This was required because "deep" is used as a keyword in the
"merged" decision (see 1.).

@klayoutmatthias
Collaborator Author

There is a connection with this issue: #1215

See the comments there and the test case.
