# torchscsp.py
"""Scale Space Toolbox for PyTorch
Extends subsets of the discscsp, gaussders and affscsp modules to PyTorch:
(discscsp) For computing discrete scale-space smoothing by convolution with the discrete
analogue of the Gaussian kernel and for computing discrete derivative approximations
by applying central difference operators to the smoothed data.
(gaussders) For computing discrete approximations of Gaussian derivatives in terms
of either sampled Gaussian derivative kernels or integrated Gaussian derivative
kernels.
(affscsp) Functions for performing the equivalent effect of convolving an image with
discrete approximations of directional derivatives of affine Gaussian
kernels, including mechanisms for scale normalization as well as a mechanism
for relative normalization of receptive field responses between different orders
of spatial differentiation.
References:
Lindeberg (1990) "Scale-space for discrete signals", IEEE Transactions on
Pattern Analysis and Machine Intelligence, 12(3): 234-254.
Lindeberg (1993a) "Discrete derivative approximations with scale-space properties:
A basis for low-level feature detection", Journal of Mathematical Imaging and Vision,
3(4): 349-376.
Lindeberg (1993b) Scale-Space Theory in Computer Vision, Springer.
Lindeberg and Garding (1997) "Shape-adapted smoothing in estimation
of 3-D depth cues from affine distortions of local 2-D structure",
Image and Vision Computing 15: 415-434
Lindeberg (2013) "A computational theory of visual receptive fields",
Biological Cybernetics, 107(6): 589-635. (See Equation (69).)
Lindeberg (2021) "Normative theory of visual receptive fields",
Heliyon 7(1): e05897: 1-20.
Lindeberg (2022) "Scale-covariant and scale-invariant Gaussian derivative
networks", Journal of Mathematical Imaging and Vision, 64(3): 223-242.
Lindeberg (2023) "Discrete approximations of Gaussian smoothing and Gaussian
derivatives", arXiv preprint arXiv:2311.11317.
"""
import math
from math import pi
from typing import Union
import numpy as np
import torch
from pyscsp.discscsp import gaussfiltsize, variance1D
# ==>> Import from other Python package awaiting a full PyTorch interface for
# ==>> the modified Bessel functions that determine the filter coefficients
# ==>> for the discrete analogue of the Gaussian kernel
from pyscsp.discscsp import make1Ddiscgaussfilter, make1Ddiscgaussderfilter
# ==>> The following functions for affine scale space do not have a full
# ==>> PyTorch interface either
from pyscsp.affscsp import samplaffgausskernel, scnormaffdirdermask
def make1Dgaussfilter(
        # sigma should be a 0-D PyTorch tensor if sigma is to be learned
        sigma : Union[float, torch.Tensor],
        scspmethod : str = 'discgauss',
        epsilon : float = 0.01,
        D : int = 1
) -> torch.Tensor :
    """Generates a mask for discrete approximation of the Gaussian kernel
    by separable filtering, using either of the methods:

    'discgauss' - the discrete analogue of the Gaussian kernel
    'samplgauss' - the sampled Gaussian kernel
    'normsamplgauss' - the L1-normalized sampled Gaussian kernel
    'intgauss' - the integrated Gaussian kernel
    'linintgauss' - the linearly integrated Gaussian kernel

    The discrete analogue of the Gaussian kernel has the best theoretical properties
    of these kernels, in the sense that it obeys both (i) non-enhancement of local
    extrema over a 2-D spatial domain and (ii) non-creation of local extrema from
    any finer to any coarser level of scale for any 1-D signal. The filter
    coefficients are (iii) guaranteed to be in the interval [0, 1] and (iv) sum
    exactly to one for an infinitely sized filter. (v) The spatial standard
    deviation of the discrete kernel is also equal to the sigma value. The current
    implementation of this filter in terms of modified Bessel functions of integer
    order is, however, not supported by existing PyTorch functions, implying that
    choosing this method will not allow for scale adaptation by backpropagation.

    For this reason, the alternative methods 'samplgauss', 'normsamplgauss',
    'intgauss' and 'linintgauss' are provided, with full implementations in terms
    of PyTorch functions, thereby supporting scale adaptation by backpropagation.
    These methods have the following possible advantages (+) and disadvantages (-):

    'samplgauss'     + no added scale offset in the spatial discretization
                     - the kernel values may become greater than 1 for small
                       values of sigma
                     - the kernel values do not sum up to one
                     - for very small values of sigma the kernels have a too
                       narrow shape
    'normsamplgauss' + no added scale offset in the spatial discretization
                     + formally the kernel values are guaranteed to be in the
                       interval [0, 1]
                     + formally the kernel values are guaranteed to sum up to 1
                     - the complementary normalization of the kernel is ad hoc
                     - for very small values of sigma the kernels have a too
                       narrow shape
    'intgauss'       + the kernel values are guaranteed to be in the interval [0, 1]
                     + the kernel values are guaranteed to sum up to 1 over an
                       infinite domain
                     - the box integration introduces a scale offset of 1/12 at
                       coarser scales
    'linintgauss'    + the kernel values are guaranteed to be in the interval [0, 1]
                     - the triangular window integration introduces a scale offset
                       of 1/6 at coarser scales

    The parameter epsilon specifies an upper bound on the relative truncation error
    for separable filtering over a D-dimensional domain.

    References:

    Lindeberg (1990) "Scale-space for discrete signals", IEEE Transactions on
    Pattern Analysis and Machine Intelligence, 12(3): 234-254.

    Lindeberg (1993) Scale-Space Theory in Computer Vision, Springer.

    Lindeberg (2023) "Discrete approximations of Gaussian smoothing and Gaussian
    derivatives", arXiv preprint arXiv:2311.11317.
    """
    if scspmethod == 'discgauss':
        # ==>> Note! Here sigma cannot be a PyTorch variable subject to scale
        # ==>> adaptation by backprop, since that would require a PyTorch
        # ==>> interface for the modified Bessel functions
        return torch.from_numpy(
            make1Ddiscgaussfilter(sigma, epsilon, D)).type(torch.FloatTensor)
    if scspmethod == 'samplgauss':
        return make1Dsamplgaussfilter(sigma, epsilon, D)
    if scspmethod == 'normsamplgauss':
        return make1Dnormsamplgaussfilter(sigma, epsilon, D)
    if scspmethod == 'intgauss':
        return make1Dintgaussfilter(sigma, epsilon, D)
    if scspmethod == 'linintgauss':
        return make1Dlinintgaussfilter(sigma, epsilon, D)
    raise ValueError(f'Scale space method {scspmethod} not implemented')
def make1Dsamplgaussfilter(
        sigma : Union[float, torch.Tensor],
        epsilon : float = 0.01,
        D : int = 1
) -> torch.Tensor :
    """Computes a 1D filter for separable discrete filtering with the
    sampled Gaussian kernel.

    Note: At very fine scales, the variance of the discrete filter may be much
    lower than sigma^2.
    """
    vecsize = int(math.ceil(gaussfiltsize(sigma, epsilon, D)))
    x = torch.linspace(-vecsize, vecsize, 2*vecsize + 1)
    return gauss(x, sigma)


def gauss(
        x : torch.Tensor,
        sigma : float = 1.0
) -> torch.Tensor :
    """Computes the 1-D Gaussian of a PyTorch tensor representing 1-D x-coordinates.
    """
    return 1/(math.sqrt(2*pi)*sigma)*torch.exp(-(x**2/(2*sigma**2)))
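# A hypothetical sanity sketch (pure Python, mirroring gauss() above without
# the torch dependency): for sigma around 1 or larger, the sampled Gaussian
# sums close to, but not exactly, one -- which is the deviation that the
# 'normsamplgauss' variant compensates for by explicit renormalization.

```python
import math

def _sampled_gauss(x, sigma=1.0):
    # same expression as gauss() above, evaluated for a scalar x
    return 1.0 / (math.sqrt(2.0 * math.pi) * sigma) \
        * math.exp(-x * x / (2.0 * sigma * sigma))

coeffs = [_sampled_gauss(i, 1.0) for i in range(-4, 5)]
total = sum(coeffs)   # close to, but not exactly, 1
```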
def make1Dnormsamplgaussfilter(
        sigma : Union[float, torch.Tensor],
        epsilon : float = 0.01,
        D : int = 1
) -> torch.Tensor :
    """Computes a 1D filter for separable discrete filtering with the L1-normalized
    sampled Gaussian kernel.

    Note: At very fine scales, the variance of the discrete filter may be much
    lower than sigma^2.
    """
    prelfilter = make1Dsamplgaussfilter(sigma, epsilon, D)
    return prelfilter/torch.sum(prelfilter)
def make1Dintgaussfilter(
        sigma : Union[float, torch.Tensor],
        epsilon : float = 0.01,
        D : int = 1
) -> torch.Tensor :
    """Computes a 1D filter for separable discrete filtering with the box-integrated
    Gaussian kernel over each pixel support region, according to Equation (3.89) on
    page 97 in Lindeberg (1993) Scale-Space Theory in Computer Vision, Springer.

    Note: Adds additional spatial variance 1/12 to the kernel at coarser scales.
    """
    vecsize = int(math.ceil(gaussfiltsize(sigma, epsilon, D)))
    x = torch.linspace(-vecsize, vecsize, 2*vecsize + 1)
    return scaled_erf(x + 0.5, sigma) - scaled_erf(x - 0.5, sigma)


def scaled_erf(
        z : torch.Tensor,
        sigma : float = 1.0
) -> torch.Tensor :
    """Computes the scaled error function (as depending on a scale parameter sigma)
    of a PyTorch tensor representing 1-D x-coordinates.
    """
    return 1/2*(1 + torch.erf(z/(math.sqrt(2)*sigma)))
def make1Dlinintgaussfilter(
        sigma : Union[float, torch.Tensor],
        epsilon : float = 0.01,
        D : int = 1
) -> torch.Tensor :
    """Computes a 1D filter for separable discrete filtering with the linearly
    integrated Gaussian kernel over each extended pixel support region.

    Note: Adds additional spatial variance 1/6 to the kernel at coarser scales.
    """
    vecsize = int(math.ceil(gaussfiltsize(sigma, epsilon, D)))
    x = torch.linspace(-vecsize, vecsize, 2*vecsize + 1)
    # The following equation is the result of a closed-form integration of
    # the expression for the filter coefficients in Eq (3.90) on page 97
    # in Lindeberg (1993) Scale-Space Theory in Computer Vision, Springer
    return x_scaled_erf(x + 1, sigma) - 2*x_scaled_erf(x, sigma) + \
           x_scaled_erf(x - 1, sigma) + \
           sigma**2 * (gauss(x + 1, sigma) -
                       2*gauss(x, sigma) + gauss(x - 1, sigma))


def x_scaled_erf(
        x : torch.Tensor,
        sigma : float = 1.0
) -> torch.Tensor :
    """Computes the product of the x-coordinate and the scaled error function (as
    depending on a scale parameter sigma) of a PyTorch tensor representing 1-D
    x-coordinates.
    """
    return x * scaled_erf(x, sigma)
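# A hypothetical numerical check (pure Python) of the claim that the linearly
# integrated Gaussian kernel adds spatial variance 1/6 at coarser scales: for
# sigma = 2 the closed-form coefficients above should have discrete variance
# close to sigma^2 + 1/6.

```python
import math

def _g(x, s):
    return math.exp(-x * x / (2 * s * s)) / (math.sqrt(2 * math.pi) * s)

def _xerf(x, s):
    # x * scaled_erf(x, s), as in x_scaled_erf() above
    return x * 0.5 * (1.0 + math.erf(x / (math.sqrt(2.0) * s)))

sigma, N = 2.0, 20
# same closed-form expression as in make1Dlinintgaussfilter()
coeffs = [_xerf(x + 1, sigma) - 2 * _xerf(x, sigma) + _xerf(x - 1, sigma)
          + sigma**2 * (_g(x + 1, sigma) - 2 * _g(x, sigma) + _g(x - 1, sigma))
          for x in range(-N, N + 1)]
var = sum(x * x * c for x, c in zip(range(-N, N + 1), coeffs))
```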
def jet2mask(C0=0.0, Cx=0.0, Cy=0.0, Cxx=0.0, Cxy=0.0, Cyy=0.0, sigma=1.0):
    """Returns a discrete mask for a Gaussian derivative layer according to
    Equation (11) in

    Lindeberg (2022) "Scale-covariant and scale-invariant Gaussian derivative
    networks", Journal of Mathematical Imaging and Vision, 64(3): 223-242.

    using variance-based normalization of the Gaussian derivative operators
    for scale normalization parameter gamma = 1.

    Note: This function is a mere template for how to compute the Gaussian
    derivative layer. For efficiency reasons, it may be better to generate the
    masks as PyTorch tensors once and for all in the Gaussian derivative layer,
    and then combine those at each new call of a Gaussian derivative layer.
    """
    return C0 + sigma*(Cx*dxmask() + Cy*dymask()) + \
           sigma**2/2*(Cxx*dxxmask() + Cxy*dxymask() + Cyy*dyymask())
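# A hypothetical worked instance of the combination in jet2mask(): with
# Cxx = Cyy = 1 and all other coefficients zero, the jet mask reduces to
# sigma^2/2 times the 5-point Laplacian (mask values hard-coded in plain
# Python lists from dxxmask() and dyymask() below).

```python
DXX = [[0.0, 0.0, 0.0], [1.0, -2.0, 1.0], [0.0, 0.0, 0.0]]
DYY = [[0.0, 1.0, 0.0], [0.0, -2.0, 0.0], [0.0, 1.0, 0.0]]
sigma = 1.0
# jet mask for Cxx = Cyy = 1: sigma^2/2 * (dxx + dyy)
lap = [[sigma**2 / 2 * (DXX[i][j] + DYY[i][j]) for j in range(3)]
       for i in range(3)]
```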
def dxmask():
    """Returns a mask for discrete approximation of the first-order derivative
    in the x-direction.
    """
    return torch.from_numpy(np.array([[ 0.0, 0.0,  0.0],
                                      [-0.5, 0.0, +0.5],
                                      [ 0.0, 0.0,  0.0]])).type(torch.FloatTensor)


def dymask():
    """Returns a mask for discrete approximation of the first-order derivative
    in the y-direction.
    """
    return torch.from_numpy(np.array([[0.0, +0.5, 0.0],
                                      [0.0,  0.0, 0.0],
                                      [0.0, -0.5, 0.0]])).type(torch.FloatTensor)


def dxxmask():
    """Returns a mask for discrete approximation of the second-order derivative
    in the x-direction.
    """
    return torch.from_numpy(np.array([[0.0,  0.0, 0.0],
                                      [1.0, -2.0, 1.0],
                                      [0.0,  0.0, 0.0]])).type(torch.FloatTensor)


def dxymask():
    """Returns a mask for discrete approximation of the mixed second-order
    derivative in the x- and y-directions.
    """
    return torch.from_numpy(np.array([[-0.25, 0.00, +0.25],
                                      [ 0.00, 0.00,  0.00],
                                      [+0.25, 0.00, -0.25]])).type(torch.FloatTensor)


def dyymask():
    """Returns a mask for discrete approximation of the second-order derivative
    in the y-direction.
    """
    return torch.from_numpy(np.array([[0.0, +1.0, 0.0],
                                      [0.0, -2.0, 0.0],
                                      [0.0, +1.0, 0.0]])).type(torch.FloatTensor)
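# A hypothetical verification (pure Python) that the central difference masks
# above are exact on second-degree polynomials, here for the hard-coded dx,
# dxx and dxy masks applied to samples of f(x, y) = 3 + 2x - y + 4x^2 + xy + 5y^2.

```python
def f(x, y):
    return 3.0 + 2.0 * x - y + 4.0 * x * x + x * y + 5.0 * y * y

x0, y0 = 1.0, 2.0
# 3 x 3 patch around (x0, y0), with row 0 at the top (largest y)
patch = [[f(x0 + j, y0 - i) for j in (-1, 0, 1)] for i in (-1, 0, 1)]

def apply_mask(mask, patch):
    return sum(mask[i][j] * patch[i][j] for i in range(3) for j in range(3))

DX  = [[0.0,  0.0, 0.0], [-0.5, 0.0, 0.5], [0.0, 0.0, 0.0]]
DXX = [[0.0,  0.0, 0.0], [1.0, -2.0, 1.0], [0.0, 0.0, 0.0]]
DXY = [[-0.25, 0.0, 0.25], [0.0, 0.0, 0.0], [0.25, 0.0, -0.25]]
fx  = apply_mask(DX,  patch)   # analytic f_x  = 2 + 8*x0 + y0 = 12
fxx = apply_mask(DXX, patch)   # analytic f_xx = 8
fxy = apply_mask(DXY, patch)   # analytic f_xy = 1
```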
def filtersdev(pytorchfilter : torch.Tensor) -> float :
    """Returns the actual spatial standard deviation of a 1-D PyTorch filter.
    """
    return math.sqrt(variance1D(pytorchfilter.numpy()))
def make1Dgaussderfilter(
        order : int,
        sigma : Union[float, torch.Tensor],
        N : int,
        gaussdermethod : str = 'discgaussder'
) -> torch.Tensor :
    """Generates a mask for discrete approximation of a Gaussian derivative
    operator of a given order and at a given scale sigma by separable
    filtering, using either of the methods:

    'samplgaussder' - the sampled Gaussian derivative kernel with variance-based
                      scale normalization
    'intgaussder' - the integrated Gaussian derivative kernel with
                    variance-based scale normalization
    'discgaussder' - discrete derivative approximations applied to the discrete
                     analogue of the Gaussian kernel

    The different discretization methods have the following relative advantages (+)
    and disadvantages (-):

    'samplgaussder': + no added scale offset in the spatial discretization
                     - for small values of sigma, the discrete kernel values may
                       sum up to a value larger than the integral of the
                       corresponding continuous kernel
                     - for very small values of sigma, the kernels have a too
                       narrow shape
    'intgaussder':   + the discrete kernel values may sum up to a value close to
                       the L1-norm of the continuous kernel over an infinite domain
                     - the box integration introduces a scale offset of 1/12 at
                       coarser scales
    'discgaussder':  + the discrete kernels obey discrete scale-space properties
                     + the kernels obey an exact cascade smoothing property over
                       scales

    The parameter N should specify the requested truncation bound for
    the filter for |x| > N, where N has to be determined in a complementary
    manner given some bound epsilon on the truncation error, for the given
    order of differentiation and the given scale value sigma.

    The scale parameter sigma should be a 0-D PyTorch tensor if sigma is to be
    learned. Then, however, the discrete analogue of the Gaussian kernel cannot
    be used, since the modified Bessel functions of integer order, underlying
    the implementation of that kernel, lack a complete PyTorch interface.

    References:

    Lindeberg (1990) "Scale-space for discrete signals", IEEE Transactions on
    Pattern Analysis and Machine Intelligence, 12(3): 234-254.

    Lindeberg (1993) Scale-Space Theory in Computer Vision, Springer.

    Lindeberg (2023) "Discrete approximations of Gaussian smoothing and Gaussian
    derivatives", arXiv preprint arXiv:2311.11317.
    """
    if gaussdermethod == 'samplgaussder':
        return make1Dsamplgaussderfilter(order, sigma, N)
    if gaussdermethod == 'intgaussder':
        return make1Dintgaussderfilter(order, sigma, N)
    if gaussdermethod == 'discgaussder':
        return make1Ddiscgaussderfilter(order, sigma, N)
    raise ValueError(f"Gaussian derivative discretization method "
                     f"{gaussdermethod} not implemented")
def make1Dsamplgaussderfilter(
        order : int,
        sigma : float,
        N : int
) -> torch.Tensor :
    """Generates a sampled Gaussian derivative kernel of a given order
    and with standard deviation sigma, truncated at the ends at -N and N.

    Note: At very fine scales, the discrete kernel values may sum up
    to a value larger than the integral of the corresponding continuous
    kernel, and the kernels may have a too narrow shape.
    """
    x = torch.linspace(-N, N, 1 + 2*N)
    if order == 0:
        return gauss0derkernel(x, sigma)
    if order == 1:
        return gauss1derkernel(x, sigma)
    if order == 2:
        return gauss2derkernel(x, sigma)
    if order == 3:
        return gauss3derkernel(x, sigma)
    if order == 4:
        return gauss4derkernel(x, sigma)
    raise ValueError(f"Not implemented for order {order}")
def gauss0derkernel(x : torch.Tensor, sigma : float = 1.0) -> torch.Tensor :
    """Computes a Gaussian function, given a set of spatial x-coordinates and a
    sigma value specifying the standard deviation of the kernel.
    """
    return 1 / (math.sqrt(2 * pi) * sigma) \
           * torch.exp(-(x**2 / (2 * sigma**2)))


def gauss1derkernel(x : torch.Tensor, sigma : float = 1.0) -> torch.Tensor :
    """Computes a first-order derivative of a Gaussian function, given a set of
    spatial x-coordinates and a sigma value specifying the standard deviation
    of the kernel.
    """
    return (-x / sigma**2) / (math.sqrt(2 * pi) * sigma) \
           * torch.exp(-(x**2 / (2 * sigma**2)))


def gauss2derkernel(x : torch.Tensor, sigma : float = 1.0) -> torch.Tensor :
    """Computes a second-order derivative of a Gaussian function, given a set of
    spatial x-coordinates and a sigma value specifying the standard deviation
    of the kernel.
    """
    return ((x**2 - sigma**2) / sigma**4) / \
           (math.sqrt(2 * pi) * sigma) * torch.exp(-(x**2 / (2 * sigma**2)))


def gauss3derkernel(x : torch.Tensor, sigma : float = 1.0) -> torch.Tensor :
    """Computes a third-order derivative of a Gaussian function, given a set of
    spatial x-coordinates and a sigma value specifying the standard deviation
    of the kernel.
    """
    return (-(x**3 - 3 * sigma**2 * x) / sigma**6) / \
           (math.sqrt(2 * pi) * sigma) * torch.exp(-(x**2 / (2 * sigma**2)))


def gauss4derkernel(x : torch.Tensor, sigma : float = 1.0) -> torch.Tensor :
    """Computes a fourth-order derivative of a Gaussian function, given a set of
    spatial x-coordinates and a sigma value specifying the standard deviation
    of the kernel.
    """
    return ((x**4 - 6 * sigma**2 * x**2 + 3 * sigma**4) / sigma**8) / \
           (math.sqrt(2 * pi) * sigma) * torch.exp(-(x**2 / (2 * sigma**2)))
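# A hypothetical consistency check (pure Python, scalar versions of the
# kernels above): the Gaussian derivative expressions satisfy the Hermite-type
# recurrence g^(n+1)(x) = -(x g^(n)(x) + n g^(n-1)(x)) / sigma^2.

```python
import math

def _gd(n, x, sigma):
    # nth-order Gaussian derivative, same polynomial factors as
    # gauss0derkernel .. gauss4derkernel above, for a scalar x
    g = math.exp(-x * x / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma)
    polys = [1.0,
             -x / sigma**2,
             (x**2 - sigma**2) / sigma**4,
             -(x**3 - 3 * sigma**2 * x) / sigma**6,
             (x**4 - 6 * sigma**2 * x**2 + 3 * sigma**4) / sigma**8]
    return polys[n] * g

sigma, x = 1.3, 0.7
recurrence_ok = all(
    abs(_gd(n + 1, x, sigma)
        + (x * _gd(n, x, sigma) + n * _gd(n - 1, x, sigma)) / sigma**2) < 1e-12
    for n in (1, 2, 3)
)
```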
def make1Dintgaussderfilter(
        order : int,
        sigma : Union[float, torch.Tensor],
        N : int
) -> torch.Tensor :
    """Generates an integrated Gaussian derivative kernel of a given order and with
    standard deviation sigma, truncated at the ends at -N and N.

    The integrated Gaussian derivative kernel is defined by integrating the
    corresponding continuous Gaussian derivative kernel over the support region
    of each pixel.

    Note: At coarser scales, the box integration over each pixel support
    region adds a scale offset to the kernel.
    """
    x = torch.linspace(-N, N, 1 + 2*N)
    if order == 0:
        return scaled_erf(x + 0.5, sigma) - scaled_erf(x - 0.5, sigma)
    if order == 1:
        return gauss0derkernel(x + 0.5, sigma) - gauss0derkernel(x - 0.5, sigma)
    if order == 2:
        return gauss1derkernel(x + 0.5, sigma) - gauss1derkernel(x - 0.5, sigma)
    if order == 3:
        return gauss2derkernel(x + 0.5, sigma) - gauss2derkernel(x - 0.5, sigma)
    if order == 4:
        return gauss3derkernel(x + 0.5, sigma) - gauss3derkernel(x - 0.5, sigma)
    raise ValueError(f"Not implemented for order {order}")
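# A hypothetical check (pure Python) of the first-order integrated derivative
# kernel defined above: by antisymmetry its coefficients sum to zero, and its
# first moment is close to the continuous value integral of x g'(x) dx = -1.

```python
import math

def _g0(x, sigma):
    return math.exp(-x * x / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma)

sigma, N = 1.0, 6
# first-order case of make1Dintgaussderfilter()
coeffs = [_g0(x + 0.5, sigma) - _g0(x - 0.5, sigma) for x in range(-N, N + 1)]
total = sum(coeffs)   # telescopes to _g0(N + 0.5) - _g0(-N - 0.5), i.e. 0
moment1 = sum(x * c for x, c in zip(range(-N, N + 1), coeffs))
```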
def makesamplaffgausskernel(
        sigma1 : float,
        sigma2 : float,
        phi : float,
        N : int
) -> torch.Tensor :
    """Computes a sampled affine Gaussian kernel of size N x N defined as

    g(x; Sigma) = 1/(2 * pi * sqrt(det Sigma)) * exp(-x^T Sigma^(-1) x / 2)

    with the covariance matrix

    Sigma = [[Cxx, Cxy],
             [Cxy, Cyy]]

    parameterized as

    Cxx = sigma1^2 * cos(phi)^2 + sigma2^2 * sin(phi)^2
    Cxy = (sigma1^2 - sigma2^2) * cos(phi) * sin(phi)
    Cyy = sigma1^2 * sin(phi)^2 + sigma2^2 * cos(phi)^2

    References:

    Lindeberg (1993) Scale-Space Theory in Computer Vision, Springer.

    Lindeberg and Garding (1997) "Shape-adapted smoothing in estimation
    of 3-D depth cues from affine distortions of local 2-D structure",
    Image and Vision Computing 15: 415-434.
    """
    return torch.from_numpy(
        samplaffgausskernel(sigma1, sigma2, phi, N)).type(torch.FloatTensor)
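# A hypothetical sketch (pure Python) of the covariance parameterization in
# the docstring above: the trace and determinant of Sigma are rotation
# invariants, equal to sigma1^2 + sigma2^2 and sigma1^2 * sigma2^2 respectively.

```python
import math

def affine_covariance(sigma1, sigma2, phi):
    # Cxx, Cxy, Cyy as parameterized in makesamplaffgausskernel()
    c, s = math.cos(phi), math.sin(phi)
    cxx = sigma1**2 * c**2 + sigma2**2 * s**2
    cxy = (sigma1**2 - sigma2**2) * c * s
    cyy = sigma1**2 * s**2 + sigma2**2 * c**2
    return cxx, cxy, cyy

cxx, cxy, cyy = affine_covariance(2.0, 1.0, math.pi / 3)
trace = cxx + cyy            # rotation invariant: sigma1^2 + sigma2^2
det = cxx * cyy - cxy**2     # rotation invariant: sigma1^2 * sigma2^2
```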
def makescnormaffdirdermask(
        sigma1 : float,
        sigma2 : float,
        phi : float,
        phiorder : int,
        orthorder : int
) -> torch.Tensor :
    """Returns a discrete directional derivative approximation mask, such that
    application of this mask to an image that has been smoothed with a zero-order
    affine Gaussian kernel gives an approximation of the scale-normalized
    directional derivative response to the receptive field

    sigma1^phiorder sigma2^orthorder D_phi^phiorder D_orth^orthorder g(x; Sigma)

    for

    D_phi = cos phi D_x + sin phi D_y
    D_orth = -sin phi D_x + cos phi D_y

    where D_phi and D_orth represent the partial derivative operators in the
    direction phi and its orthogonal direction orth, respectively, and where it
    is assumed that the convolution with g(x; Sigma) (with the covariance matrix
    Sigma specified using the same values of sigma1, sigma2 and phi) is
    computed outside of this function.

    The intention is that the mask returned by this function should be applied
    to affine Gaussian smoothed images. Specifically, for an image processing
    method that makes use of a filter bank of directional derivatives of
    affine Gaussian kernels, the intention is that the computationally heavy
    affine Gaussian smoothing operation should be performed only once, and
    that different directional derivative approximation masks should then
    be applied to the same affine Gaussian smoothed image, thus saving
    a substantial amount of work, compared to applying full-size affine
    Gaussian directional derivative masks for different choices of orders
    for the directional derivatives.

    Reference:

    Lindeberg (2021) "Normative theory of visual receptive fields",
    Heliyon 7(1): e05897: 1-20. (See Equation (31).)
    """
    return torch.from_numpy(
        scnormaffdirdermask(sigma1, sigma2, phi, phiorder, orthorder)).type(torch.FloatTensor)