SCIP

Solving Constraint Integer Programs

Frequently Asked Questions (FAQ)

General Questions about SCIP

  1. SCIP is a solver for Mixed Integer Linear and Nonlinear Problems that allows for an easy integration of arbitrary constraints. It can be used as a framework for branch-cut-and-price and contains all necessary plugins to serve as a standalone solver for MIP and MINLP.

    • You can use the precompiled binaries to solve MIPs and MINLPs. Learn more about the file formats supported by SCIP, and get an overview of the supported problem classes and additional recommendations for solving them.
    • You can use SCIP as a subroutine for solving MINLPs and more general constraint integer programs from your own source code.
    • You can use SCIP as a framework in which you implement your own plugins.
    • You can use SCIP in any combination of the three purposes above.

    This FAQ contains separate sections covering each of these usages of SCIP. It further covers specific questions about particular features.

  2. If you are either looking for a fast non-commercial MIP/MINLP solver or for a branch-cut-and-price framework in which you can directly implement your own methods and retain full control of the solving process.

  3. As long as you use it for academic, non-commercial purposes: No. This will not change. For the other cases, check the explanation of the ZIB academic license and always feel free to ask us. If you want to use SCIP commercially, please write an e-mail to koch@zib.de.

  4. An easy way is to use the SCIP-binaries and call SCIP from a shell, see here for a tutorial. For that, you just have to download one of the precompiled binaries from the download section, or the zipped source code and compile it with your favorite settings. This is described in detail in the INSTALL file in the SCIP main directory.

    Another way is to use SCIP as a solver integrated into your own program source code. See the directories "examples/MIPsolver/" and "examples/Queens/" for simple examples.

    A third way is to implement your own plugins into SCIP. This is explained in the HowTos for all plugin types, which you can find in the doxygen documentation. See also How to start a new project.

  5. Unless you want to use SCIP as a pure CP-Solver (see here), you need an underlying LP-solver installed and linked to the libraries (see the INSTALL file in the SCIP root directory). LP-solvers currently supported by SCIP include SoPlex, CPLEX, Gurobi, Xpress, and Mosek, among others.

    We also provide some precompiled binaries. Besides that, you might need a modeling language like ZIMPL to generate *.mps or *.lp files. ZIMPL files can also directly be read by SCIP. You can download a package which includes SCIP, SoPlex and ZIMPL here.

    If you want to use SCIP for mixed integer nonlinear programming, you might want to use an underlying NLP solver (e.g., Ipopt). SCIP already comes with the CppAD expression interpreter as part of its source code.

  6. Read the INSTALL file in the SCIP root directory. It contains hints on how to work around common problems. You can also try the binaries available on the SCIP page.
    If you want to use SoPlex as the underlying LP-solver, you can try the following:

    First, download the SCIP Optimization Suite. Then, extract the file, change into the scipoptsuite directory, and enter 'make'. As long as you have all the necessary libraries installed on your system, it should generate a SCIP binary linked to ZIMPL and SoPlex. The necessary system libraries are:

    1. ZLIB (libz.a)
    2. GMP (libgmp.a)
    3. Readline (libreadline.a)

    If you do not have all of these libraries, read the INSTALL file in the SCIP Optimization Suite directory.

    In short, the call make ZIMPL=false ZLIB=false READLINE=false should work on most systems. If you have any problems while using an LP-solver different from SoPlex, please read the SCIP INSTALL file first.

    If you encounter compilation problems dealing with the explicit keyword you can either try a newer compiler or set the flags LEGACY=true for SoPlex and SPX_LEGACY=true for SCIP.

  7. Compile SCIP in debug mode: make OPT=dbg. Run the binary in a debugger, e.g., gdb, and let it run again. If you get an impression of which component is causing the trouble, set #define SCIP_DEBUG as the first line of the corresponding *.c file, recompile, and let it run again. This will print debug messages from that piece of code. Find a short debugging tutorial here.

  8. See above. Often, the asserts that show up in debug mode already help to clarify misunderstandings and suggest fixes, if you were calling SCIP functions in an unexpected manner. For sending bug reports, please see our Contact information from which you can directly access an online form for reporting bugs.

  9. For an explanation of the naming see the coding style guidelines.
    The Public C-API of SCIP is separated into a Core API provided by the header scip.h and a default plugin API provided by scipdefplugins.h. The large API is structured into topics for a better overview.

  10. Yes. SCIP can be used as a pure CP/SAT Solver by typing set emphasis cpsolver in the shell or by using the function SCIPsetEmphasis(). Furthermore, you can compile SCIP without any LP-Solver by make LPS=none. See here for more on changing the behavior of SCIP.
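
    From C code, the same emphasis switch can be applied via SCIPsetEmphasis(). A minimal sketch, assuming an already created SCIP instance (the helper name is only illustrative):

      #include <scip/scip.h>

      static SCIP_RETCODE useCpEmphasis(SCIP* scip)
      {
         /* equivalent of "set emphasis cpsolver" in the shell; TRUE means quiet (no parameter output) */
         SCIP_CALL( SCIPsetEmphasis(scip, SCIP_PARAMEMPHASIS_CPSOLVER, TRUE) );

         return SCIP_OKAY;
      }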

  11. Since LPs are only special types of MIPs and CIPs, the principal answer is yes. If you feed a pure LP to SCIP, it will first apply presolving and then hand this presolved problem to the underlying LP solver. If the LP is solved to optimality, you can query the optimal solution values as always. You can also access the values of an optimal dual solution by using display dualsolution.

    However, there are certain limitations to this: Reduced costs are not accessible. If the LP turns out to be infeasible, you cannot currently obtain a Farkas proof. And recall that this approach is only meaningful if the problem is an LP (no integer variables, only linear constraints).

    Hence, if you need more, "LP specific", information than the primal solution, you are better off using an LP-Solver directly. If you are using the SCIP Optimization Suite, you could, e.g., use the included LP solver SoPlex. If you want to solve an LP not from the command line, but within your C/C++ program, you could also use SCIP's LP-Interface, see also here.

  12. SCIP supports nonlinear constraints of the form lhs ≤ f(x) ≤ rhs, where the function f(x) is an algebraic expression that can be represented as an expression tree. Such an expression tree has constants and variables as terminal nodes and operators as non-terminal nodes. Expression operators supported by SCIP include addition, subtraction, multiplication, division, exponentiation, and logarithm. Trigonometric functions are not yet supported by SCIP.

    Nonlinear objective functions are not supported by SCIP and must be modeled via a constraint: for example, to minimize a nonlinear function f(x), introduce an auxiliary variable z, add the constraint f(x) ≤ z, and minimize z. Note that the support for non-quadratic nonlinear constraints is not yet as robust as the rest of SCIP. Missing bounds on nonlinear variables and tiny or huge coefficients can easily lead to numerical problems, which can be avoided by careful modeling.

  13. SCIP can be compiled using either Makefiles or CMake. The new CMake build system is recommended for both new and long-time users of SCIP. Consult the CMake documentation for further information about the changes introduced in the new system (like Linux-conform naming conventions for libraries). We still support the traditional Makefile system for backwards compatibility, but it might be discontinued at some point.

  14. When SCIP builds the binary, it needs to link with the corresponding libraries, i.e., its own libraries and those of an LP-solver. There are usually two ways to distribute a library (on UNIX systems). In the first (with suffix ".a"), the library is linked statically to SCIP; this means that all information is packed into the binary. In the second way (with suffix ".so"), the library is a shared library. In this case, the code of the library is not inserted into the binary itself, but is loaded at runtime. This has the advantage that the binaries are smaller, but it comes at the cost that you have to make sure that the library is found at runtime. (SCIP adds rpath information containing the path to the shared libraries to the binary; this usually allows these libraries to be found at runtime. If this does not work, for most systems it suffices to put the path of the library into the LD_LIBRARY_PATH environment variable).

    There are compiler-dependent preferences which version should be included if both a static and a shared version of the same library are available. In order to avoid confusion, SCIP separates shared from static libraries into the directories "lib/static" and "lib/shared" when using the Makefile system. Note that some LP-solvers are only shipped with a shared version.

  15. You can use the SHARED=true option when making SCIP. This will generate the libraries of SCIP in shared format. The binary then also uses this form. Note that the path to the lib/shared directory of SCIP is used to locate the libraries. If you want to move the libraries, you might need to set the LD_LIBRARY_PATH environment variable to include the new path. If you are using your own build system: The "magic" changes are the -fPIC compiler/linker option and the -Wl,-rpath option.

  16. In fact, there is a slight difference: SCIPvarGetSol() is also able to return pseudo solution values. If you do not know what pseudo solutions are, SCIPgetVarSol() should be just fine. This should be the only case of 'duplicate methods'. If you find another one, however, please contact us.

  17. Yes, currently three tree visualizations are supported: the vbctool, HyDraw, and GrUMPy. The first comes with a viewer that has an option to uncover the nodes one by one (each time you hit the space key). Additional node information such as a node's lower bound, depth, and number is accessed through a context menu. HyDraw can display a live visualization of the tree using JavaView. GrUMPy is a Python tool that can create noninteractive visualizations of the tree in various formats.

    For using one of these tools, SCIP lets you define file names via set visual vbcfilename somefilename.vbc and set visual bakfilename somefilename.dat. GrUMPy uses BAK files, while the other two tools parse vbc output.

    For those who want to use the step-by-step functionality of vbctool, it is necessary to use a time-step counter for the visualization instead of the real time. The corresponding parameter is changed via set visual realtime FALSE.

    For users of the callable library, the corresponding parameters are called "visual/bakfilename", "visual/vbcfilename", and "visual/realtime".

  18. By default, the SCIP output contains the display column "gap", which is computed as follows: If primal and dual bound have opposite signs, the gap is "Infinity". If primal and dual bound have the same sign, the gap is |primalbound - dualbound| / min(|primalbound|, |dualbound|) (see SCIPgetGap()). This definition has the advantage that the gap decreases monotonically during the solving process.

    An alternative definition of the gap is (dual bound - primal bound) / abs(primal bound). Using this definition, a finite gap is computed as soon as a dual and a primal bound have been found. However, the gap may increase during the solving process if both bounds have opposite signs in the beginning. In SCIP, this definition can be included in the output by enabling the display column "primalgap": display/primalgap/active = 2

  19. The bliss library can be compiled with or without GMP support. If bliss is compiled with GMP, the macro definition BLISS_USE_GMP must be added when building SCIP; otherwise the headers do not match the library, which produces the crash. If bliss is compiled as a shared library, the CMake system should be able to detect this automatically. In case you do not need symmetry handling, the easiest way to resolve the problem is to disable symmetry handling during compilation (see Makefiles or CMake) or with the SCIP parameter misc/usesymmetry = 0. Otherwise, it can be resolved either by adding the macro definition with the compile flag -DBLISS_USE_GMP or by compiling bliss differently, i.e., without GMP support.


Using SCIP as a standalone solver

  1. In the interactive shell you can set the width of the output with the command set display width followed by an appropriate number. See also the next question.

  2. Type display display in the interactive shell to get an explanation of them.
    By the way: If a letter appears in front of a display row, it indicates which heuristic found the new primal bound; a star represents an integral LP relaxation.
    Typing display statistics after finishing or interrupting the solving process gives you plenty of extra information about the solving process.
    (Typing display heuristics gives you a list of the heuristics including their letters.)

  3. SCIP comes with default settings that are automatically active when you start the interactive shell. However, you have the possibility to save customized settings via the set save and set diffsave commands. Both commands will prompt you to enter a file name and save either all parameters (set save) or only those that differ from their defaults (set diffsave) to the specified file. A user parameter file that you save as "scip.set" has a special meaning: whenever you invoke SCIP from a directory containing a file named "scip.set", the settings therein overwrite the default settings. For more information about customized settings, see the Tutorial on the interactive shell. Settings files can become incompatible with later releases if we decide to rename/delete a parameter. Information about this can be found in the CHANGELOG for every release, see also this related question.

  4. You can switch the settings for all presolving, heuristics, and separation plugins to three different modes via the set {presolving, heuristics, separation} emphasis parameters in the interactive shell. off turns off the respective type of plugins, fast chooses settings that lead to less time spent in this type of plugins, decreasing their impact, and aggressive increases the impact of this type of plugins. You can combine these general settings for cuts, presolving, and heuristics arbitrarily.
    display parameters shows you which settings currently differ from their default, set default resets them all. Furthermore, there are complete settings that can be set by set emphasis, e.g., settings for pure feasibility problems, solution counting, and CP-like search.

  5. You can look at the statistics (type display statistics in the interactive shell or call SCIPprintStatistics() when using SCIP as a library). This way you can see which of the presolvers, propagators, or constraint handlers performed the reductions. Then, add a #define SCIP_DEBUG as the first line of the corresponding *.c file in src/scip (e.g., cons_linear.c or presol_probing.c), recompile, and run again. You will get heaps of information now. Looking into the code and documentation of the corresponding plugin and resources from the literature helps with the investigation.

    • To use a non-default branching rule or node selection strategy by default, you just have to give it the highest priority, using
      • SCIP> set branching <name of a branching rule> priority 9999999
      • SCIP> set nodeselectors <name of a node selector> priority 9999999
      With the commands
      • SCIP> display branching
      • SCIP> display nodeselectors
      you get a list of all branching rules and node selectors, respectively. These lists give information about the different priorities.
    • If you want to completely disable a heuristic or a separator, you have to set its frequency to -1; for separation by a constraint handler, set its sepafreq to -1. The commands look like this:
      • SCIP> set heuristics <name of a heuristic> freq -1
      • SCIP> set separators <name of a separator> freq -1
      • SCIP> set constraints <name of a constraint handler> sepafreq -1
    • For disabling a presolver, you have to set its maxrounds parameter to 0.
      • SCIP> set presolvers <name of a presolver> maxrounds 0
    • If you want to intensify the usage of a heuristic, you can reduce its frequency to some smaller, positive value, and/or raise the quotient and offset values (maxlpiterquot for diving heuristics, nodesquot for LNS heuristics).
      • SCIP> set heuristics <name of a heuristic> freq <some value>
      • SCIP> set heuristics <name of a diving heuristic> maxlpiterquot <some value>
      • SCIP> set heuristics <name of an LNS heuristic> nodesquot <some value>
    • For intensifying the usage of a separator, you can raise its maxroundsroot and maxsepacutsroot values.
      • SCIP> set separators <name of a separator> maxroundsroot <some value>
      • SCIP> set separators <name of a separator> maxrounds <some value>
      If you also want to use this separator locally, you have to set its frequency to a positive value and possibly raise maxrounds and maxsepacuts.
      • SCIP> set separators <name of a separator> freq <some value>
      • SCIP> set separators <name of a separator> maxsepacuts <some value>
      Compare the parameters of the heuristic/separator in the corresponding aggressive settings.
    • For weakening, you should just do the opposite operation, i.e., reducing the values you would raise for intensification and vice versa.
  6. If you want to keep the interactive shell functionality, you could add a dialog handler that introduces a new SCIP shell command that

    1. solves the problem and calls your function afterwards or
    2. checks whether the stage is SOLVED and only then calls your function.

    Search SCIPdialogExecOptimize in src/scip/dialog_default.c to see how the functionality of the "optimize" command is invoked. Also, in src/scip/cons_countsols.c, you can see an example of a dialog handler being added to SCIP. If this is the way you go, please check the How to add dialogs section of the doxygen documentation.

  7. Please consult this overview on the problem classes supported by SCIP and the recommendations and links for MINLPs therein.


Using SCIP included in another source code

  1. For starters, SCIP comes with complete examples in source code that illustrate the problem creation process. Please refer to the examples of the Callable Library section in the Example Documentation of SCIP.

    First you have to create a SCIP object via SCIPcreate(), then you start to build the problem via SCIPcreateProb(). Then you create variables via SCIPcreateVar() and add them to the problem via SCIPaddVar().

    The same has to be done for the constraints. For example, if you want to fill in the rows of a general MIP, you have to call SCIPcreateConsLinear(), SCIPaddCons(), and additionally SCIPreleaseCons() after finishing. Once all variables and constraints are present, you can initiate the solution process via SCIPsolve().

    Make sure to also call SCIPreleaseVar() if you do not need the variable pointer anymore. For an explanation of creating and releasing objects, please see the notes on releasing objects.
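
    As an illustration, the following minimal sketch builds and solves the toy problem max x + 3y subject to x + 2y ≤ 3 with x binary and y ∈ {0,1,2}, using the convenient "basic" creation variants (SCIPcreateProbBasic(), SCIPcreateVarBasic(), SCIPcreateConsBasicLinear(), available since SCIP 3.0); the problem data and names are of course only placeholders:

      #include <scip/scip.h>
      #include <scip/scipdefplugins.h>

      static SCIP_RETCODE runToyModel(void)
      {
         SCIP* scip = NULL;
         SCIP_VAR* x = NULL;
         SCIP_VAR* y = NULL;
         SCIP_CONS* cons = NULL;
         SCIP_VAR* vars[2];
         SCIP_Real vals[2] = {1.0, 2.0};

         /* create a SCIP instance and include all default plugins */
         SCIP_CALL( SCIPcreate(&scip) );
         SCIP_CALL( SCIPincludeDefaultPlugins(scip) );

         /* create an empty problem and switch to maximization */
         SCIP_CALL( SCIPcreateProbBasic(scip, "toy") );
         SCIP_CALL( SCIPsetObjsense(scip, SCIP_OBJSENSE_MAXIMIZE) );

         /* create the variables and add them to the problem */
         SCIP_CALL( SCIPcreateVarBasic(scip, &x, "x", 0.0, 1.0, 1.0, SCIP_VARTYPE_BINARY) );
         SCIP_CALL( SCIPcreateVarBasic(scip, &y, "y", 0.0, 2.0, 3.0, SCIP_VARTYPE_INTEGER) );
         SCIP_CALL( SCIPaddVar(scip, x) );
         SCIP_CALL( SCIPaddVar(scip, y) );

         /* create the linear constraint x + 2y <= 3, add it, and release the handle */
         vars[0] = x;
         vars[1] = y;
         SCIP_CALL( SCIPcreateConsBasicLinear(scip, &cons, "cap", 2, vars, vals, -SCIPinfinity(scip), 3.0) );
         SCIP_CALL( SCIPaddCons(scip, cons) );
         SCIP_CALL( SCIPreleaseCons(scip, &cons) );

         /* solve and print the best solution found */
         SCIP_CALL( SCIPsolve(scip) );
         SCIP_CALL( SCIPprintBestSol(scip, NULL, FALSE) );

         /* release the variable pointers and free the SCIP instance */
         SCIP_CALL( SCIPreleaseVar(scip, &x) );
         SCIP_CALL( SCIPreleaseVar(scip, &y) );
         SCIP_CALL( SCIPfree(&scip) );

         return SCIP_OKAY;
      }

      int main(void)
      {
         SCIP_RETCODE retcode = runToyModel();

         if( retcode != SCIP_OKAY )
         {
            SCIPprintError(retcode);
            return -1;
         }
         return 0;
      }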

  2. First you have to build your problem (at least all variables have to exist), then there are several different ways:

    • If you have the solution in a file that uses the solution format of SCIP, you can use SCIPreadSol() to pass that solution to SCIP.
    • You create a new SCIP primal solution candidate by calling SCIPcreateSol() and set all nonzero values by calling SCIPsetSolVal(). After that, you add this solution by calling SCIPaddSol() (the stored flag will be TRUE afterwards if your solution was added to the solution candidate store) and then release it by calling SCIPfreeSol(). Instead of adding and releasing sequentially, you can use SCIPaddSolFree(), which tries to add the solution to the candidate store and frees it afterwards (see the sketch after this list).
    • Since SCIP 4.0.0, there is the possibility to create partial solutions via SCIPcreatePartialSol(). A solution is partial if not all solution values are known before the solve. After creation, all solution values are unknown unless explicitly given via SCIPsetSolVal(). In contrast, solutions created via SCIPcreateSol() implicitly assume a solution value of zero. A typical example for a problem involving integer and continuous variables is a tentative assignment of all integer variables, leaving the values of the continuous variables to be determined by the solver. After starting the solving process, SCIP will try to heuristically complete all partial solutions that were added during problem creation.
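
    A minimal sketch of the second approach (the variables x and y and their values are only placeholders):

      #include <scip/scip.h>

      static SCIP_RETCODE addKnownSolution(SCIP* scip, SCIP_VAR* x, SCIP_VAR* y)
      {
         SCIP_SOL* sol;
         SCIP_Bool stored;

         /* create an empty primal solution candidate (NULL: not attached to any heuristic) */
         SCIP_CALL( SCIPcreateSol(scip, &sol, NULL) );

         /* set the nonzero solution values */
         SCIP_CALL( SCIPsetSolVal(scip, sol, x, 1.0) );
         SCIP_CALL( SCIPsetSolVal(scip, sol, y, 2.0) );

         /* try to add the solution to the candidate store and free it afterwards */
         SCIP_CALL( SCIPaddSolFree(scip, &sol, &stored) );

         if( stored )
            SCIPinfoMessage(scip, NULL, "solution was added to the candidate store\n");

         return SCIP_OKAY;
      }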

  3. There are fourteen different stages during a run of SCIP. Some methods cannot be called in all stages; consider, for example, SCIPtrySol() (see the previous question).

  4. Before the solving process starts, the original problem is copied. This copy is called "transformed problem", and all modifications during the presolving and solving process are only applied to the transformed problem.
    This has two main advantages: first, the user can also modify the problem after partially solving it. All modifications done by SCIP (presolving, cuts, variable fixings) during the partial solving process will be deleted together with the transformed problem, the user can modify the original problem and restart solving. Second, the feasibility of solutions is always tested on the original problem!

  5. This can have several reasons. Especially names of binary variables can get different prefixes and suffixes. Each transformed variable and constraint (see here) gets a "t_" as prefix. Apart from that, the meaning of original and transformed variables and constraints is identical.

    General integers with bounds that differ just by 1 will be aggregated to binary variables which get the same name with the suffix "_bin". E.g., an integer variable t_x with lower bound 4 and upper bound 5 will be aggregated to a binary variable t_x_bin = t_x - 4.

    Variables can have negated counterparts, e.g. for a binary t_x its (also binary) negated would be t_x_neg = 1 - t_x.

    The knapsack constraint handler is able to disaggregate its constraints to cliques, which are set packing constraints, and create names that consist of the knapsack's name and a suffix "_clq_<int>". E.g., a knapsack constraint knap: x_1 + x_2 + 2 x_3 ≤ 2 could be disaggregated to the set packing constraints knap_clq_1: x_1 + x_3 ≤ 1 and knap_clq_2: x_2 + x_3 ≤ 1.

  6. Yes, you do. SCIP_CALL() is a global define which handles the return codes of all methods that return a SCIP_RETCODE and should therefore wrap each such call. SCIP_OKAY is the code which is returned if everything worked well; there are 17 different error codes, see type_retcode.h. Each method that calls methods which return a SCIP_RETCODE should itself return a SCIP_RETCODE. If this is not possible, use SCIP_CALL_ABORT() to catch the return codes of the methods. If you do not want to use this either, you have to do the exception handling (i.e., the case that the return code is not SCIP_OKAY) on your own.
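
    A short sketch of this convention; the helper names are only illustrative, while SCIPreadProb() and SCIPsolve() are regular SCIP calls returning a SCIP_RETCODE:

      #include <scip/scip.h>

      static SCIP_RETCODE readAndSolve(SCIP* scip, const char* filename)
      {
         /* both calls return a SCIP_RETCODE, so they are wrapped in SCIP_CALL(),
          * and this function itself returns a SCIP_RETCODE to propagate errors */
         SCIP_CALL( SCIPreadProb(scip, filename, NULL) );
         SCIP_CALL( SCIPsolve(scip) );

         return SCIP_OKAY;
      }

      /* in a context that cannot return a SCIP_RETCODE, catch the code instead */
      static void readAndSolveOrAbort(SCIP* scip, const char* filename)
      {
         SCIP_CALL_ABORT( readAndSolve(scip, filename) );
      }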

  7. Limits are given by parameters in SCIP, for example limits/time for a time limit or limits/nodes for a node limit. If you want to set a limit, you have to change these parameters. For example, for setting the time limit to one hour, you have to call SCIP_CALL( SCIPsetRealParam(scip, "limits/time", 3600) ). In the interactive shell, you just enter set limits time 3600. For more examples, please have a look into heur_rens.c.
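
    For example, a small sketch that sets a few common limits from C code (the concrete values are arbitrary):

      #include <scip/scip.h>

      static SCIP_RETCODE setLimits(SCIP* scip)
      {
         /* one hour time limit */
         SCIP_CALL( SCIPsetRealParam(scip, "limits/time", 3600.0) );
         /* process at most 10000 branch-and-bound nodes */
         SCIP_CALL( SCIPsetLongintParam(scip, "limits/nodes", 10000LL) );
         /* stop as soon as the relative gap drops below 1% */
         SCIP_CALL( SCIPsetRealParam(scip, "limits/gap", 0.01) );

         return SCIP_OKAY;
      }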


Using SCIP as a Branch-Cut-And-Price-Framework

  1. See the doxygen documentation for a list of plugin types. There is a HowTo for each of them.

  2. This depends on whether you want to add constraints or only cutting planes. The main difference is that constraints can be "model constraints", while cutting planes are only additional LP rows that strengthen the LP relaxation. A model constraint is a constraint that is important for the feasibility of the integral solutions. If you delete a model constraint, some infeasible integral vectors would suddenly become feasible in the reduced model. A cutting plane is redundant w.r.t. integral solutions. The set of feasible integral vectors does not change if a cutting plane is removed. You can, however, relax this condition slightly and add cutting planes that do cut off feasible solutions, as long as at least one of the optimal solutions remains feasible.

    You want to use a constraint handler in the following cases:

    1. Some of your feasibility conditions can not be expressed by existing constraint types (e.g., linear constraints), or you would need too many of them. For example, the "nosubtour" constraint in the TSP is equivalent to exponentially many linear constraints. Therefore, it is better to implement a "nosubtour" constraint handler that can inspect solutions for subtours and generate subtour elimination cuts and others (e.g., comb inequalities) to strengthen the LP relaxation.
    2. Although you can express your feasibility condition by a reasonable number of existing constraint types, you can represent and process the condition in a more efficient way. For example, it may be that you can, due to your structural knowledge, implement a stronger or faster domain propagation or find tighter cutting planes than what one could do with the sum of the individual "simple" constraints that model the feasibility condition.

    You want to use a cutting plane separator in the following cases:

    1. You have a general purpose cutting plane procedure that can be applied to any MIP. It does not use problem specific knowledge. It only looks at the LP, the integrality conditions, and other deduced information like the implication graph.
    2. You can describe your feasibility condition by a set C of constraints of existing type (e.g., linear constraints). The cuts you want to separate are model specific, but apart from these cuts, there is nothing you can gain by substituting the set C of constraints with a special purpose constraint. For example, the preprocessing and the domain propagation methods for the special purpose constraint would do basically the same as what the existing constraint handler does with the set C of constraints. In this case, you don't need to implement the more complex constraint handler. You add constraints of existing type to your problem instance in order to produce a valid model, and you enrich the model by your problem specific cutting plane separator to make the solving process faster. You can easily evaluate the performance impact of your cutting planes by enabling and disabling the separator.

    Note that a constraint handler is defined by the type of constraints that it manages. For constraint handlers, always think in terms of constraint programming. For example, the "nosubtour" constraint handler in the TSP example (see "ConshdlrSubtour.cpp" in the directory "scip/examples/TSP/src/") manages "nosubtour" constraints, which demand that in a given graph no feasible solution can contain a tour that does not contain all cities. In the usual TSP problem, there is only one "nosubtour" constraint, because there is only one graph for which subtours have to be ruled out. The "nosubtour" constraint handler has various ways of enforcing the "nosubtour" property of the solutions. A simple way is to just check each integral solution candidate (in the CONSCHECK, CONSENFOLP, and CONSENFOPS callback methods) for subtours. If there is a subtour, the solution is rejected. A more elaborate way includes the generation of "subtour elimination cuts" in the CONSSEPALP callback method of the constraint handler. Additionally, the constraint handler may want to separate other types of cutting planes like comb inequalities in its CONSSEPALP callback.

  3. Setting the status of a display column to 0 turns it off. E.g., type set display memused status 0 in the interactive shell to disable the memory information column, or include the line SCIPsetIntParam(scip, "display/memused/status", 0) into your source code. Adding your own display column can be done by calling the SCIPincludeDisp() method, see the doxygen documentation.
    The statistic display, which is shown by display statistics and SCIPprintStatistics(), respectively, cannot be changed.

  4. Each row is of the form lhs ≤ Σ_j val[j]·col[j] + constant ≤ rhs. For now, val[j]·col[j] can be interpreted as a_ij·x_j (for the difference between columns and variables see here). The constant is essentially needed for collecting the influence of presolving reductions like variable fixings and aggregations.
    The lhs and rhs may take infinite values: a less-than inequality would have lhs = -∞, and a greater-than inequality would have rhs = +∞. For equations lhs is equal to rhs. An infinite left hand side can be recognized by SCIPisInfinity(scip, -lhs), an infinite right hand side can be recognized by SCIPisInfinity(scip, rhs).

  5. You can get all rows in the current LP-relaxation by calling SCIPgetLPRowsData(). The methods SCIProwGetConstant(), SCIProwGetLhs(), SCIProwGetRhs(), SCIProwGetVals(), SCIProwGetNNonz(), SCIProwGetCols() then give you information about each row, see previous question.

    You get a columnwise representation by calling SCIPgetLPColsData(). The methods SCIPcolGetLb() and SCIPcolGetUb() give you the locally valid bounds of a column in the LP relaxation of the current branch-and-bound-node.

    If you are interested in global information, you have to call SCIPcolGetVar() to get the variable associated to a column (see next question), which you can ask for global bounds via SCIPvarGetLbGlobal() and SCIPvarGetUbGlobal() as well as the type of the variable (binary, general integer, implicit integer, or continuous) by calling SCIPvarGetType(). For more information, also see this question.
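
    The following sketch walks over the rows of the current LP relaxation and prints their data; such code can only be run during the solving process, e.g., from a separator or constraint handler callback (the function name is only illustrative):

      #include <scip/scip.h>

      static SCIP_RETCODE printLPRows(SCIP* scip)
      {
         SCIP_ROW** rows;
         int nrows;
         int r;

         SCIP_CALL( SCIPgetLPRowsData(scip, &rows, &nrows) );

         for( r = 0; r < nrows; ++r )
         {
            SCIP_COL** cols = SCIProwGetCols(rows[r]);
            SCIP_Real* vals = SCIProwGetVals(rows[r]);
            int nnonz = SCIProwGetNNonz(rows[r]);
            int j;

            /* print the row sides and its constant */
            SCIPinfoMessage(scip, NULL, "row <%s>: lhs=%g rhs=%g constant=%g\n",
               SCIProwGetName(rows[r]), SCIProwGetLhs(rows[r]), SCIProwGetRhs(rows[r]),
               SCIProwGetConstant(rows[r]));

            /* print the nonzero entries, mapping each column back to its variable */
            for( j = 0; j < nnonz; ++j )
               SCIPinfoMessage(scip, NULL, "  %g * <%s>\n", vals[j], SCIPvarGetName(SCIPcolGetVar(cols[j])));
         }

         return SCIP_OKAY;
      }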

  6. The terms columns and rows always refer to the representation in the current LP-relaxation, variables and constraints to your global Constraint Integer Program.
    Each column has an associated variable, which it represents, but not every variable must be part of the current LP-relaxation. E.g., it could be already fixed, aggregated to another variable, or be priced out if a column generation approach was implemented.

    Each row has either been added to the LP by a constraint handler or by a cutting plane separator. A constraint handler is able to, but does not need to, add one or more rows to the LP as a linear relaxation of each of its constraints. E.g., in the usual case (i.e. without using dynamic rows) the linear constraint handler adds one row to the LP for each linear constraint.

  7. The variable array which you get by SCIPgetVars() is internally sorted by variable types. The ordering is binary, integer, implicit integer, and continuous variables, i.e., the binary variables are stored at positions [0,...,nbinvars-1], the general integers at [nbinvars,...,nbinvars+nintvars-1], and so on. It holds that nvars = nbinvars + nintvars + nimplvars + ncontvars. There is no further sorting within these sections, and there is no particular sorting of the rows either. But each column and each row has a unique index, which can be obtained by SCIPcolGetIndex() and SCIProwGetIndex(), respectively.
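
    A small sketch of accessing the sections of the variable array (to be called once the problem has been created; the function name is only illustrative):

      #include <scip/scip.h>

      static void printVariableSections(SCIP* scip)
      {
         SCIP_VAR** vars = SCIPgetVars(scip);
         int nbinvars = SCIPgetNBinVars(scip);
         int nintvars = SCIPgetNIntVars(scip);
         int i;

         /* the binary variables come first ... */
         for( i = 0; i < nbinvars; ++i )
            SCIPinfoMessage(scip, NULL, "binary variable <%s>\n", SCIPvarGetName(vars[i]));

         /* ... followed by the general integer variables */
         for( i = nbinvars; i < nbinvars + nintvars; ++i )
            SCIPinfoMessage(scip, NULL, "integer variable <%s>\n", SCIPvarGetName(vars[i]));
      }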

  8. A variable v is implicit integer if it is guaranteed to take an integer solution value in every optimal solution to every remaining subproblem after fixing all integer variables of the problem. Implicit integer variables are represented by the variable type SCIP_VARTYPE_IMPLINT. Note that continuous as well as integer variables can be declared implicit integer.

    The solver benefits from the presence of implicit integrality in several ways: Implicit integer variables can be treated like continuous variables for branching, because it is not necessary to enforce integrality. They can be treated like integer variables to yield stronger propagations, better coefficients in cuts, etc.
    Another advantage of this definition is that it allows heuristics to know that if a feasible point has integer values for all integer variables, then either all implicit integer variables have integer values or the value of an implicit integer variable can be assigned to an integer without worsening the objective value.

    Currently, SCIP identifies implicit integer variables via feasibility reasoning and optimality reasoning during presolving.
    As an example of feasibility reasoning, consider the constraint x + y = 1 where both variables are integer. From this constraint, either one of them could be declared implicit integer. This is because fixing one to an integer value forces the other to be integer as well. Note that you must not declare both variables implicit integer, since the reasoning depends on the other variable still being of integer type.
    As an example of optimality reasoning, consider the problem max x subject to x + y <= 10 and y has to be integer. After fixing y to any integer value, the optimal value of x will be 10 - y which is an integer. Hence, x can be declared implicit integer. Note, however, that if the constraint is 2*x + y <= 10, then we cannot conclude that x is implicit integer.

  9. There are various numerical comparison functions available, each of them using a different epsilon in its comparisons. Let's take the equality comparison as an example. There are the following methods available: SCIPisEQ(), SCIPisSumEQ(), SCIPisFeasEQ(), SCIPisRelEQ(), SCIPisSumRelEQ(). A short usage sketch follows after the list below.

    • SCIPisEQ() should be used to compare two single values that are either results of a simple calculation or are input data. The comparison is done w.r.t. the "numerics/epsilon" parameter, which is 1e-9 in the default settings.
    • SCIPisSumEQ() should be used to compare the results of two scalar products or other "long" sums of values. In these sums, numerical inaccuracy can occur due to cancellation of digits in the addition of values with opposite sign. Therefore, SCIPisSumEQ() uses a relaxed equality tolerance of "numerics/sumepsilon", which is 1e-6 in the default settings.
    • SCIPisFeasEQ() should be used to check the feasibility of some result, for example after you have calculated the activity of a constraint and compare it with the left and right hand sides. The feasibility is checked w.r.t. the "numerics/feastol" parameter, and equality is defined in a relative fashion in contrast to absolute differences. That means, two values are considered to be equal if their difference divided by the larger of their absolute values is smaller than "numerics/feastol". This parameter is 1e-6 in the default settings.
    • SCIPisRelEQ() can be used to check the relative difference between two values, just like what SCIPisFeasEQ() is doing. In contrast to SCIPisFeasEQ() it uses "numerics/epsilon" as tolerance.
    • SCIPisSumRelEQ() is the same as SCIPisRelEQ() but uses "numerics/sumepsilon" as tolerance. It should be used to compare two results of scalar products or other "long" sums.
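
    A minimal usage sketch, assuming vars, vals, nvars, and rhs describe a linear constraint a·x ≤ rhs and sol is a primal solution (all names are placeholders):

      #include <scip/scip.h>

      static SCIP_Bool rowIsFeasible(SCIP* scip, SCIP_SOL* sol, SCIP_VAR** vars, SCIP_Real* vals, int nvars, SCIP_Real rhs)
      {
         SCIP_Real activity = 0.0;
         int i;

         /* a "long" sum of products, where cancellation of digits may occur */
         for( i = 0; i < nvars; ++i )
            activity += vals[i] * SCIPgetSolVal(scip, sol, vars[i]);

         /* use the feasibility tolerance to decide whether the constraint holds */
         return SCIPisFeasLE(scip, activity, rhs);
      }
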
  10. If the LP is only a slightly modified version of the LP relaxation - changed variable bounds or objective coefficients - then you can use SCIP's diving mode: methods SCIPstartDive(), SCIPchgVarLbDive(), SCIPsolveDiveLP(), etc.

    Alternatively, SCIP's probing mode allows for a tentative depth first search in the tree and can solve the LP relaxations at each node: methods SCIPstartProbing(), SCIPnewProbingNode(), SCIPfixVarProbing(), etc. However, you cannot change objective coefficients or enlarge variable bounds in probing mode.

    If you need to solve a separate LP, creating a sub-SCIP is not recommended because of the overhead involved and because dual information is not accessible (compare here). Instead you can use SCIP's LP interface. For this you should include lpi/lpi.h and call the methods provided therein. Note that the LPI can be used independently from SCIP.
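
    A sketch of the probing variant, tentatively fixing a single variable and solving the resulting LP relaxation; this can only be done during the solving process, e.g., from a plugin callback (the function name and the fixing value are only illustrative):

      #include <scip/scip.h>

      static SCIP_RETCODE probeFixing(SCIP* scip, SCIP_VAR* var)
      {
         SCIP_Bool lperror;
         SCIP_Bool cutoff;

         /* enter probing mode and create a probing node */
         SCIP_CALL( SCIPstartProbing(scip) );
         SCIP_CALL( SCIPnewProbingNode(scip) );

         /* tentatively fix the variable to 1.0 and solve the probing LP (-1: no iteration limit) */
         SCIP_CALL( SCIPfixVarProbing(scip, var, 1.0) );
         SCIP_CALL( SCIPsolveProbingLP(scip, -1, &lperror, &cutoff) );

         if( !lperror && !cutoff )
            SCIPinfoMessage(scip, NULL, "probing LP objective: %g\n", SCIPgetLPObjval(scip));

         /* leave probing mode; all probing changes are undone */
         SCIP_CALL( SCIPendProbing(scip) );

         return SCIP_OKAY;
      }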


Specific questions about Column Generation and Branch-And-Price with SCIP

  1. If you want to use SCIP as a branch-and-price framework, you normally need to implement a reader to read in your problem data and build the problem, a pricer to generate new columns, and a branching rule to do the branching (see also this question to see how to store branching decisions, if needed). SCIP takes care of everything else, for example the branch-and-bound tree management and LP solving including storage of warmstart bases. Moreover, many of SCIP's primal heuristics will be used and can help improve your primal bound. However, this also comes with a few restrictions: You are not allowed to change the objective function coefficients of variables during the solving process, because that means that previously computed dual bounds might have to be updated. This prevents the use of dual variable stabilization techniques based on a (more or less strict) bounding box in the dual. We are working on making this possible and recommend using a weighted sum stabilization approach until then. Another point that SCIP does for you is the dynamic removal of columns from the LP due to aging (see also the next two questions). However, due to the way simplex bases are stored in SCIP, columns can only be removed at the same node where they were created.

  2. With SCIPgetLPColsData() you can obtain the columns of the current LP relaxation. It is correct that not all variables are necessarily part of the current LP relaxation. In particular, in branch-and-price the variables generated at one node in the tree are not necessarily included in the LP relaxation of a different node (e.g., if the other node is not a descendant of the first node). But even if you are still at the same node or at a descendant node, SCIP can remove columns from the LP, if they are 0 in the LP relaxation. This dynamic column deletion can be avoided by setting the "removable" flag to FALSE in the SCIPcreateVar() call.

  3. As described in the previous question, it may happen that some variables are not in the current LP relaxation. Nevertheless, these variables still exist, and SCIP can calculate their reduced costs and add them to the LP again, if necessary. This is the job of the variable pricer. It is called before all other pricers.

  4. This is a very common problem in branch-and-price, which you can deal with nicely in SCIP. There are basically three different options. The first one is to add binary variables to the problem that encode branching decisions. Then constraints should be added that enforce the corresponding branching decisions in the subtrees.

    If you have complex pricer data like a graph and need to update it after each branching decision, you should introduce "marker constraints" that are added to the branching nodes and store all the information needed (see the next question).

    The third way is to use an event handler, which is described here.

  5. This can be done by creating a new constraint handler with constraint data that can store the information and do/undo changes in the pricer's data structures.

    Once you have such a constraint handler, just create constraints of this type and add them to the child nodes of your branching by SCIPaddConsNode(). Make sure to set the "stickingatnode" flag to TRUE in order to prevent SCIP from moving the constraint around in the tree.

    In general, all methods of the constraint handler (check, enforcing, separation, ...) should be empty (which means that they always return the status SCIP_FEASIBLE for the fundamental callbacks), just as if all constraints of this type are always feasible. The important callbacks are the CONSACTIVE and CONSDEACTIVE methods for communicating the constraints along the active path to your pricer, and the CONSDELETE callback for deleting data of constraints at nodes which became obsolete.

    The CONSACTIVE method is always called when a node is entered on which the constraint has been added. Here, you need to apply the changes to your pricing data structures. The CONSDEACTIVE method will be called if the node is left again. Since the CONSACTIVE and CONSDEACTIVE methods of different constraints are always called in a stack-like fashion, this should be exactly what you need.

    All data of a constraint need to be freed by implementing an appropriate CONSDELETE callback.

    If you need to fix variables for enforcing your branching decision, this can be done in the propagation callback of the constraint handler. Since, in general, each node is only propagated once, in this case you will have to check in your CONSACTIVE method whether new variables were added after your last propagation of this node. If this is the case, you will have to mark this node for repropagation by SCIPrepropagateNode().

    You can look into the constraint handler of the coloring problem (examples/Coloring/src/cons_storeGraph.c) to get an example of a constraint handler that does all these things.

  6. An event handler can watch for events like local bound changes on variables. So, if your pricer wants to be informed whenever a local bound of a certain variable changes, add an event handler, catch the corresponding events of the variable, and in the event handler's execution method adjust the data structures of your pricer accordingly.
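
    A compact sketch of such an event handler; the name "boundwatcher" and the message output are only illustrative, a real implementation would update the pricer's data structures instead, and the watched variable must be a transformed variable:

      #include <scip/scip.h>

      #define EVENTHDLR_NAME "boundwatcher"
      #define EVENTHDLR_DESC "reacts to local bound changes of watched variables"

      /* execution method of the event handler */
      static
      SCIP_DECL_EVENTEXEC(eventExecBoundwatcher)
      {
         SCIP_VAR* var = SCIPeventGetVar(event);

         /* here a pricer would adjust its data structures; we only report the change */
         SCIPinfoMessage(scip, NULL, "bound of <%s> changed to %g\n",
            SCIPvarGetName(var), SCIPeventGetNewbound(event));

         return SCIP_OKAY;
      }

      /* include the event handler and start watching bound changes of one variable */
      static SCIP_RETCODE watchVariable(SCIP* scip, SCIP_VAR* var)
      {
         SCIP_EVENTHDLR* eventhdlr;

         SCIP_CALL( SCIPincludeEventhdlrBasic(scip, &eventhdlr, EVENTHDLR_NAME, EVENTHDLR_DESC,
               eventExecBoundwatcher, NULL) );

         /* catch local bound change events of the given (transformed) variable */
         SCIP_CALL( SCIPcatchVarEvent(scip, var, SCIP_EVENTTYPE_BOUNDCHANGED, eventhdlr, NULL, NULL) );

         return SCIP_OKAY;
      }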

  7. Variables in SCIP are always added globally. If you want to add them locally, because they are forbidden in another part of the branch-and-bound-tree, you should ensure that they are locally fixed to 0 in all subtrees where they are not valid. A description of how this can be done is given here.

  8. First check whether your pricing is correct. Are there upper bounds on variables that you have forgotten to take into account? If your pricer cannot cope with variable bounds other than 0 and infinity, you have to mark all constraints containing priced variables as modifiable, and you may have to disable reduced cost strengthening by setting propagating/rootredcost/freq to -1.

    If your pricer works correctly and makes sure that the same column is added at most once in one pricing round, this behavior is probably caused by the PRICER_DELAY property of your pricer.

    If it is set to FALSE, the following may have happened: The variable pricer (see this question) found a variable with negative dual feasibility that was not part of the current LP relaxation and added it to the LP. In the same pricing round, your own pricer found the same column and created a new variable for it. This might happen, since your pricer uses the same dual values as the variable pricer. To avoid this behavior, set PRICER_DELAY to TRUE, so that the LP is reoptimized after the variable pricer added variables to the LP. You can find some more information about the PRICER_DELAY property at How to add variable pricers .

  9. In most cases you should deactivate separators since cutting planes that are added to your master problem may destroy your pricing problem. Additionally, it may be necessary to deactivate some presolvers, mainly the dual fixing presolver. This can be done by not including these plugins into SCIP, namely by not calling SCIPincludeSepaXyz() and SCIPincludePresolXyz() in your own plugins-including files. Alternatively, you can set the parameters maxrounds and maxroundsroot to zero for all separators and maxrounds to zero for the presolvers.

  10. In many Branch-and-Price applications, you have binary variables, but you do not want to impose upper bounds on these variables in the LP relaxation, because the upper bound is already implicitly enforced by the problem constraints and the objective. If the upper bounds are explicitly added to the LP, they lead to further dual variables, which may be hard to take into account in the pricing problem.

    There are two possibilities for how to solve this problem. First, you could change the binary variables to general integer variables, if this does not change the problem. However, if you use special linear constraints like set partitioning/packing/covering, you can only add binary variables to these constraints.

    In order to still allow the usage of these types of constraints in a branch-and-price approach, the concept of lazy bounds was introduced in SCIP 2.0. For each variable, you can define lazy upper and lower bounds, i.e., bounds that are implicitly enforced by constraints and the objective. SCIP adds variable bounds to the LP only if the bound is tighter than the corresponding lazy bound. Note that lazy bounds are explicitly put into and removed from the LP when starting and ending diving mode, respectively. This is needed because changing the objective in diving might reverse the implicitly enforced bounds.

    For instance, if you have set partitioning constraints in your problem, you can define variables contained in these constraints as binary and set the lazy upper bound to 1, which allows you to use the better propagation methods of the setppc constraint handler compared to the linear constraint handler without taking care about upper bounds on variables in the master.
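
    A sketch of the lazy bound approach: the variable is declared binary, but its upper bound of 1 is marked as lazy so that it is not added to the LP (the helper name is only illustrative):

      #include <scip/scip.h>

      static SCIP_RETCODE addLazyBinaryVar(SCIP* scip, SCIP_VAR** var, const char* name, SCIP_Real obj)
      {
         SCIP_CALL( SCIPcreateVarBasic(scip, var, name, 0.0, 1.0, obj, SCIP_VARTYPE_BINARY) );
         SCIP_CALL( SCIPaddVar(scip, *var) );

         /* tell SCIP that the upper bound 1.0 is already enforced by the constraints
          * and the objective, so it is only put into the LP if a tighter bound appears */
         SCIP_CALL( SCIPchgVarUbLazy(scip, *var, 1.0) );

         return SCIP_OKAY;
      }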

  11. In a column generation approach, you usually have to solve the master problem to optimality; otherwise, its objective function value is not a valid dual bound. However, there is a way in SCIP to stop the pricing process earlier, called "early branching".

    The reduced cost pricing method of a pricer has a result pointer that should be set each time the method is called. In the usual case that the pricer either adds a new variable or ensures that there are no further variables with negative dual feasibility, the result pointer should be set to SCIP_SUCCESS. If the pricer aborts pricing without creating a new variable, but there might exist additional variables with negative dual feasibility, the result pointer should be set to SCIP_DIDNOTRUN. In this case, the LP solution will not be used as a lower bound. Typically, early branching goes along with the computation of a Lagrangian bound in each pricing iteration. The pricer can store this valid lower bound in the lowerbound pointer in order to update the lower bound of the current node. Since SCIP 3.1, it is even possible to state that pricing should be stopped early even though new variables were created in the last pricing round. For this, the pricer has to set the stopearly pointer to TRUE.

  12. SCIP tries to detect whether the objective function values of all solutions must be integral or the problem can be scaled such that the former holds. If this is the case, solving will be stopped as soon as the absolute gap is below 1.0 (scaled).

    However, the detection does not work in case of branch-and-price, because SCIP cannot know whether any of the newly created variables would violate this property. For this case, there is the possibility to inform SCIP that all newly created variables will be integer and have an integer objective coefficient by calling SCIPsetObjIntegral(). This knowledge will then be exploited by SCIP for bounding.

  13. SCIP features the functionality to delete variables from the problem when performing branch-and-price. This feature is still in a beta status and can be activated by switching the parameters pricing/delvars and pricing/delvarsroot to TRUE in order to allow deletion of variables at the root node and at all other nodes, respectively. Furthermore, variables have to be marked to be deletable by SCIPvarMarkDeletable(), which has to be done before adding the variable to the problem. Then, after a node of the branch-and-bound-tree is processed, SCIP automatically deletes variables from the problem that were created at the current node and whose corresponding columns were already removed from the LP. Note that due to the way SCIP stores basis information, it is not possible to completely delete a variable that was created at another node than the current node. You might want to change the parameters lp/colagelimit, lp/cleanupcols, and lp/cleanupcolsroot, which have an impact on when and how fast columns are removed from the LP.

    Constraint handlers support a callback function that removes deletable variables from their constraints. Thus, when using automatic variable deletion, you should make sure that all constraint handlers in use implement this callback. Currently, the linear, the set partitioning/packing/covering, and the knapsack constraint handlers support this callback, which should be sufficient for most branch-and-price applications. Note that set covering constraints can be used instead of logicor constraints.

    Instead of deleting a variable completely, you can also remove it from the problem either by fixing the variable to zero using SCIPfixVar(), which fixes the variable globally, or by using SCIPchgVarUbNode() and SCIPchgVarLbNode(), which change the bounds only for the current subtree.
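
    A small sketch of these two ways of removing a (nonnegative) priced variable; the helper and its arguments are only illustrative, and the usual stage restrictions of SCIPfixVar() apply:

      #include <scip/scip.h>

      static SCIP_RETCODE removePricedVariable(SCIP* scip, SCIP_VAR* var, SCIP_NODE* node, SCIP_Bool global)
      {
         if( global )
         {
            SCIP_Bool infeasible;
            SCIP_Bool fixed;

            /* fix the variable to zero in the whole tree */
            SCIP_CALL( SCIPfixVar(scip, var, 0.0, &infeasible, &fixed) );
         }
         else
         {
            /* force the variable to zero only in the subtree rooted at the given node */
            SCIP_CALL( SCIPchgVarUbNode(scip, node, var, 0.0) );
         }

         return SCIP_OKAY;
      }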

  14. Constraint-based branching is rather straightforward to implement in SCIP. You have to add a new branching rule that uses the methods SCIPcreateChild() and SCIPaddConsNode() in its branching callbacks. A very good example for this is the Ryan/Foster branching rule that has been implemented in the binpacking example from the examples section.

    Sometimes it might be more appropriate to implement a constraint handler instead of a branching rule. This is the case if, e.g., the added constraints alone do NOT ensure integrality of the integer variables, or if you still want to use the available branching rules. The branching then really happens in the ENFOLP callback of your constraint handler. The integrality constraint handler calls the branching rules within its ENFOLP callback. Give your constraint handler a positive enforcement priority so that it is called before the integrality constraint handler and can perform the constraint branching.
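
    A sketch of the core of such a branching callback: two children are created and one (already constructed, hypothetical) branching constraint is attached locally to each of them; the node selection priorities and estimates are chosen naively here:

      #include <scip/scip.h>

      static SCIP_RETCODE branchWithConstraints(SCIP* scip, SCIP_CONS* samecons, SCIP_CONS* differcons)
      {
         SCIP_NODE* childsame;
         SCIP_NODE* childdiffer;

         /* create the two child nodes, using the local estimate for both */
         SCIP_CALL( SCIPcreateChild(scip, &childsame, 0.0, SCIPgetLocalTransEstimate(scip)) );
         SCIP_CALL( SCIPcreateChild(scip, &childdiffer, 0.0, SCIPgetLocalTransEstimate(scip)) );

         /* add the branching constraints locally to the corresponding children */
         SCIP_CALL( SCIPaddConsNode(scip, childsame, samecons, NULL) );
         SCIP_CALL( SCIPaddConsNode(scip, childdiffer, differcons, NULL) );

         return SCIP_OKAY;
      }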


Specific questions about the copy functionality in SCIP

  1. The functionality of copying a SCIP model was added in SCIP version 2.0.0. It gives the possibility to generate a copy of the current SCIP model. This functionality is of interest, for example, in large neighborhood heuristics (such as heur_rens.c). They can now easily copy the complete problem and fix a certain set of variables to work on a reasonable copy of the original problem.

    Since SCIP version 4.0.0, the additional copying method SCIPcopyConsCompression() is available, which expects as additional argument a list of variables that should be fixed in the problem copy. These will be fixed right away at creation, so that all constraints may treat those variables as constants to potentially reduce the memory required to store the problem copy.

  2. This, of course, depends on the problem copy's intended use. The large neighborhood search heuristics such as, e.g., heur_rens.c, usually create a problem copy in which they fix a number of variables and solve the remaining, smaller subproblem only once. In this case, it makes sense to use SCIPcopyConsCompression() that treats fixed variables as constants at constraint creation time to save memory.

    For a more general use of the problem copy such as resolving with different objective functions or multiple solves for different sets of fixed variables, you should clearly use SCIPcopy() because this is beyond the scope of a compressed copy.

  3. For the variables and constraints there are the methods SCIPgetVarCopy() and SCIPgetConsCopy() which provide a copy for a variable or a constraint, respectively.

  4. SCIP would like to know if the copied problem is a valid copy. A problem copy is called valid if it is valid in both the primal and the dual sense, i.e., if

    • it is a relaxation of the source problem
    • it does not enlarge the feasible region.
    If this is the case, all reductions made in the copy can be transferred to the original instance. The problem defining objects in SCIP are the constraint handlers and the variable pricers.

    A constraint handler may choose to not copy a constraint and still declare the resulting copy as valid. Therefore, it must ensure the feasibility of any solution to the problem copy in the original (source) space.