deeprobust.image package
Subpackages
- deeprobust.image.attack package
- Submodules
- deeprobust.image.attack.BPDA module
- deeprobust.image.attack.Nattack module
- deeprobust.image.attack.Universal module
- deeprobust.image.attack.YOPOpgd module
- deeprobust.image.attack.base_attack module
- deeprobust.image.attack.cw module
- deeprobust.image.attack.deepfool module
- deeprobust.image.attack.fgsm module
- deeprobust.image.attack.l2_attack module
- deeprobust.image.attack.lbfgs module
- deeprobust.image.attack.onepixel module
- deeprobust.image.attack.pgd module
- Module contents
- deeprobust.image.defense package
- Submodules
- deeprobust.image.defense.LIDclassifier module
- deeprobust.image.defense.TherEncoding module
- deeprobust.image.defense.YOPO module
- deeprobust.image.defense.base_defense module
- deeprobust.image.defense.fast module
- deeprobust.image.defense.fgsmtraining module
- deeprobust.image.defense.pgdtraining module
- deeprobust.image.defense.trades module
- Module contents
- deeprobust.image.netmodels package
- Submodules
- deeprobust.image.netmodels.CNN module
- deeprobust.image.netmodels.CNN_multilayer module
- deeprobust.image.netmodels.YOPOCNN module
- deeprobust.image.netmodels.densenet module
- deeprobust.image.netmodels.preact_resnet module
- deeprobust.image.netmodels.resnet module
- deeprobust.image.netmodels.train_model module
- deeprobust.image.netmodels.train_resnet module
- deeprobust.image.netmodels.vgg module
- Module contents
Submodules
deeprobust.image.config module
deeprobust.image.evaluation_attack module
deeprobust.image.optimizer module
This module includes the following optimizers:
1. differential_evolution: the differential evolution global optimization algorithm, adapted from https://github.com/scipy/scipy/blob/70e61dee181de23fdd8d893eaa9491100e2218d7/scipy/optimize/_differentialevolution.py as modified by https://github.com/DebangLi/one-pixel-attack-pytorch/blob/master/differential_evolution.py
2. a basic Adam optimizer
differential_evolution(func, bounds, args=(), strategy='best1bin', maxiter=1000, popsize=15, tol=0.01, mutation=(0.5, 1), recombination=0.7, seed=None, callback=None, disp=False, polish=True, init='latinhypercube', atol=0)
Finds the global minimum of a multivariate function. Differential evolution is stochastic in nature (it does not use gradient methods) and can search large areas of candidate space, but it often requires more function evaluations than conventional gradient-based techniques. The algorithm is due to Storn and Price [1].
Parameters:
- func (callable) – The objective function to be minimized. Must be in the form f(x, *args), where x is the argument in the form of a 1-D array and args is a tuple of any additional fixed parameters needed to completely specify the function.
- bounds (sequence) – Bounds for variables: (min, max) pairs for each element in x, defining the lower and upper bounds for the optimizing argument of func. It is required to have len(bounds) == len(x); len(bounds) is used to determine the number of parameters in x.
- args (tuple, optional) – Any additional fixed parameters needed to completely specify the objective function.
- strategy (str, optional) – The differential evolution strategy to use. Should be one of 'best1bin', 'best1exp', 'rand1exp', 'randtobest1exp', 'currenttobest1exp', 'best2exp', 'rand2exp', 'randtobest1bin', 'currenttobest1bin', 'best2bin', 'rand2bin', 'rand1bin'. The default is 'best1bin'.
- maxiter (int, optional) – The maximum number of generations over which the entire population is evolved. The maximum number of function evaluations (with no polishing) is (maxiter + 1) * popsize * len(x).
- popsize (int, optional) – A multiplier for setting the total population size. The population has popsize * len(x) individuals (unless the initial population is supplied via the init keyword).
- tol (float, optional) – Relative tolerance for convergence. The solving stops when np.std(pop) <= atol + tol * np.abs(np.mean(population_energies)), where atol and tol are the absolute and relative tolerances respectively.
- mutation (float or tuple(float, float), optional) – The mutation constant, also known in the literature as the differential weight F. If specified as a float it should be in the range [0, 2]. If specified as a tuple (min, max), dithering is employed: the mutation constant for each generation is drawn from U[min, max), which can speed convergence significantly. Increasing the mutation constant increases the search radius but slows down convergence.
- recombination (float, optional) – The recombination constant, should be in the range [0, 1]. In the literature this is also known as the crossover probability CR. Increasing this value allows a larger number of mutants to progress into the next generation, but at the risk of population stability.
- seed (int or np.random.RandomState, optional) – If seed is not specified, the np.random.RandomState singleton is used. If seed is an int, a new np.random.RandomState instance is used, seeded with seed. If seed is already a np.random.RandomState instance, that instance is used. Specify seed for repeatable minimizations.
- disp (bool, optional) – Display status messages.
- callback (callable, callback(xk, convergence=val), optional) – A function to follow the progress of the minimization. xk is the best solution found so far; val represents the fractional value of the population convergence, and when val is greater than one the function halts. If callback returns True, the minimization is halted (any polishing is still carried out).
- polish (bool, optional) – If True (default), scipy.optimize.minimize with the L-BFGS-B method is used to polish the best population member at the end, which can improve the minimization slightly.
- init (str or array-like, optional) – Specify which type of population initialization is performed. Should be 'latinhypercube', 'random', or an array specifying the initial population; the array should have shape (M, len(x)), where len(x) is the number of parameters, and is clipped to bounds before use. The default is 'latinhypercube'. Latin hypercube sampling tries to maximize coverage of the available parameter space. 'random' initializes the population randomly, with the drawback that clustering can occur, preventing the whole parameter space from being covered. An array can be used, for example, to create a tight bunch of initial guesses in a location where the solution is known to exist, thereby reducing the time for convergence.
- atol (float, optional) – Absolute tolerance for convergence. The solving stops when np.std(pop) <= atol + tol * np.abs(np.mean(population_energies)), where atol and tol are the absolute and relative tolerances respectively.
Returns: res – The optimization result, represented as an OptimizeResult object. Important attributes are: x, the solution array; success, a Boolean flag indicating whether the optimizer exited successfully; and message, which describes the cause of the termination. See OptimizeResult for a description of other attributes. If polish was employed and a lower minimum was obtained by the polishing, OptimizeResult also contains the jac attribute.
Return type: OptimizeResult
Notes
Differential evolution is a stochastic population-based method that is useful for global optimization problems. At each pass through the population, the algorithm mutates each candidate solution by mixing it with other candidate solutions to create a trial candidate. There are several strategies [2] for creating trial candidates, which suit some problems more than others. The 'best1bin' strategy is a good starting point for many systems. In this strategy, two members of the population are randomly chosen, and their difference is used to mutate the best member so far (the "best" in 'best1bin'), b_0:

b' = b_0 + mutation * (population[rand0] - population[rand1])

A trial vector is then constructed. Starting with a randomly chosen i-th parameter, the trial is sequentially filled (in modulo) with parameters from b' or the original candidate. The choice between b' and the original candidate is made with a binomial distribution (the 'bin' in 'best1bin'): a random number in [0, 1) is generated, and if it is less than the recombination constant the parameter is loaded from b'; otherwise it is loaded from the original candidate. The final parameter is always loaded from b'. Once the trial candidate is built, its fitness is assessed. If the trial is better than the original candidate, it takes its place; if it is also better than the best overall candidate, it replaces that as well. To improve your chances of finding a global minimum, use higher popsize values together with higher mutation and dithering, but lower recombination values; this widens the search radius but slows convergence. A sketch of the 'best1bin' trial construction appears below.

New in version 0.15.0.
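For illustration only, here is a minimal NumPy sketch of how one 'best1bin' trial vector could be assembled for candidate i. This paraphrases the strategy described above and is not the module's actual code; it assumes the population is stored as a (popsize, len(x)) array with one energy value per member:

    import numpy as np

    def best1bin_trial(population, energies, i, mutation=0.8,
                       recombination=0.7, rng=None):
        if rng is None:
            rng = np.random.RandomState(0)
        n, d = population.shape
        b0 = population[np.argmin(energies)]           # best member so far
        r0, r1 = rng.choice(n, size=2, replace=False)  # two distinct random members
        b_prime = b0 + mutation * (population[r0] - population[r1])
        # Binomial crossover: each parameter comes from b' with probability
        # `recombination`; at least one parameter is always taken from b'.
        use_mutant = rng.rand(d) < recombination
        use_mutant[rng.randint(d)] = True
        return np.where(use_mutant, b_prime, population[i])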
References
[1] Storn, R. and Price, K., "Differential Evolution - a Simple and Efficient Heuristic for Global Optimization over Continuous Spaces", Journal of Global Optimization, 1997, 11, 341-359.
[2] http://www1.icsi.berkeley.edu/~storn/code.html
[3] http://en.wikipedia.org/wiki/Differential_evolution
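A short usage sketch, assuming the function is importable from deeprobust.image.optimizer as documented above (the objective here is a toy quadratic, not part of the library):

    import numpy as np
    from deeprobust.image.optimizer import differential_evolution

    def sphere(x):                       # toy objective: global minimum at the origin
        return np.sum(x ** 2)

    bounds = [(-5, 5), (-5, 5)]          # one (min, max) pair per parameter
    result = differential_evolution(sphere, bounds, strategy='best1bin',
                                    mutation=(0.5, 1),  # a tuple enables dithering
                                    recombination=0.7, seed=1, tol=0.01)
    print(result.x, result.fun)          # solution array and its objective value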
deeprobust.image.utils module
onehot_like(a, index, value=1)
Creates an array like a, with all values set to 0 except one.
Parameters:
- a – The returned one-hot array will have the same shape and dtype as this array.
- index (int) – The index that should be set to value.
- value (single value compatible with a.dtype) – The value to set at the given index.
Returns: One-hot array with the given value at the given location and zeros everywhere else.
Return type: numpy.ndarray
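A short usage sketch, assuming onehot_like is importable from deeprobust.image.utils as listed above:

    import numpy as np
    from deeprobust.image.utils import onehot_like

    a = np.zeros(5, dtype=np.float32)
    print(onehot_like(a, index=2, value=1))   # [0. 0. 1. 0. 0.]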