
- Hello everyone, my name is Mateusz Pusz.
I'm the Chief Software Engineer at EPAM Systems.
Thanks to that, I often work with large code bases
and large-scale projects.
Also, being a C++ trainer and consultant,
I am often asked by my customers
what is the best way to export
their libraries and their products,
and how to import and use the dependencies
of their C++ projects.
To answer those questions, I spent quite a lot of time
exploring the tools that are on the market right now.
I ended up with the ones stated in the slide title,
Git, CMake, and Conan,
and we'll talk about them today.
First, I would like to state
that I am in no way connected with the Conan team.
They don't pay me for this.
I just found that it works really nicely for me,
so I decided to share that knowledge with you.
Second, we have really a lot of material to cover today
and only 60 minutes.
So, if possible, please postpone your questions
until the end of the presentation,
and if time allows, I will answer them.
Okay?
So let's start.
Let's see what we have on the market for C++ right now.
Do we have some standardized tool sets already?
I think that when we are talking about
the version control system,
every one of you is thinking about Git right now, yes?
This is a standard that is used
by nearly everyone.
For building our projects, we use CMake.
And this is also quite a standard.
Maybe not the best one, but it is a standard,
and we have to agree with that because
most projects can be built with CMake right now.
For package management, we have nothing right now
that is standardized and used by everyone
or by most of the libraries.
My personal note here,
and this is basically a request to you:
please do not fragment
our C++ development environment even more.
We already did that,
and right now it is somehow converging
to the tools in this table, and that's good.
The fact that you may find some tool
that works better for you in one area
doesn't mean that it works better in all cases.
Some tools are better at one thing,
but worse at many other things.
There is no game changer on the market right now.
By game changer, I mean what Git did,
for example, with Subversion and other version
control systems some time ago.
That was a game changer, yes?
At least as long as such a game changer
does not appear on the market,
it is good to use those tools for our projects,
to make it easier for everyone
to use and reuse your libraries,
and not have to fight with five different build systems
when trying to build your library,
for example, with its dependencies.
It turns out that, at least in my opinion,
Conan is the answer in the
package management area here.
It's really good.
As my client said on the last training I did, it just works,
and I think that's the best recommendation for a tool:
it just works.
In the C++ world, that's not that obvious,
and it's really welcome here.
Okay, so let's analyze how we handle dependencies
in our C++ projects.
First of all, it's common to use something
like an external or third_party subdirectory
in our project:
we copy-paste the dependency's code there
and use CMake's add_subdirectory to include it and build it.
I assume that you have seen things like this
being done in C++ projects.
Another option is Git submodules,
which do exactly the same.
And you can also use CMake's ExternalProject_Add command.
That will download the sources for you,
but the result is still added
with add_subdirectory in CMake.
Those three approaches have some problems
that I will address on the next slide.
A fourth option is that you can always download,
build, and install each dependency by yourself
and then look for it with find_package.
We often do this, for example,
with GTest or maybe with Boost:
we just install it in our file system
and use find_package to find it.
Another approach is that people try package managers
from other languages, like NuGet or Maven,
to do C++ stuff.
Those are really good package managers, but not for C++.
They were created with another language in mind,
and they do not work well for us.
The last point here is the rise of dedicated
C++ package managers on the market.
I think the best example right now is Conan,
and we'll talk about it today.
Why not add_subdirectory?
Let's consider such a dependency graph.
We have some LibA here,
and it uses two dependencies:
Boost 1.66 and GTest 1.8.0.
By this arrow, I mean that Boost
is exported in LibA's interface.
And by this one, I mean a private
dependency, needed by the project for unit testing,
so it's not exported with the package
when it's installed, yes?
And it works fine for this case.
However, we may find out that we have another library, LibB,
that uses a different Boost version,
a different GTest version,
and it also has OpenSSL,
which is maybe not exported outside our LibB library
but is still used by it in the package that we distribute.
And it still works fine.
However, when we now build an application
that uses both of them,
and also uses GTest, but in yet another version,
and chooses a different OpenSSL,
we start to have problems here,
especially because LibA and LibB export some headers,
for example Boost.Filesystem or other Boost headers,
in their interfaces.
And now, through those different interfaces, we have
different libraries of the same package
in different versions.
It doesn't scale, and it's really hard to manage.
So, we'll try to address the other points,
especially how to do it right with Conan, later on,
but first, we have to talk about CMake.
CMake is not a build system.
It's a cross-platform build system generator.
This is the typical workflow that we use with CMake.
We configure with CMake,
we build with CMake,
we test with CMake,
and we install with CMake.
You can see that we don't have to use
make or other build-tool-specific commands here,
not even from Visual Studio.
The important difference here is that some generators
allow us to specify the build type
during the configuration stage.
This is the case when using GCC or Clang
with the Makefile generators, for example.
But for the Visual Studio generator,
we provide it in the build, test, and installation phases,
and not during the configuration phase.
At configuration time, we don't know whether the client
will want a Debug or Release build from us.
This is an important difference,
and it affects how we have to write our CMake code,
as you will see later on.
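As a hedged illustration (exact flags may vary by CMake version), that generator-agnostic workflow can look like this on the command line:

    mkdir build && cd build
    cmake .. -DCMAKE_BUILD_TYPE=Release    # configure
    cmake --build .                        # build
    ctest                                  # test
    cmake --build . --target install       # install

    # with a multi-config generator (e.g. Visual Studio),
    # the build type moves to the later phases instead:
    cmake ..
    cmake --build . --config Release
    ctest -C Release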
There is something called Modern CMake.
I will not repeat all of it here.
There is a really great introduction to it by Daniel Pfeifer
in his 'Effective CMake' presentation.
So, I recommend going to that presentation
and learning from it.
I will just point out some important highlights
from it here.
First of all, build flags don't scale:
every change in public flags
has to be propagated upwards.
By this, I mean, for example,
that if you have information like include directories
in some dependency low in the hierarchy,
you have to provide the same include directory
up, up, up, for all the projects above it,
in order for them to be able to find those header files.
It doesn't scale at all.
If something changes in your dependency,
because there is a new version of this dependency,
you have to modify all the CMake files
of the projects using this dependency
in order to find those header files,
which is a really big problem when scaling
large software and applications.
Also, different targets may require different,
conflicting flag values.
For example, consider a non-unique name like VERBOSE.
One library may use VERBOSE
in a range from zero to three.
Another library may use VERBOSE
in a range from one to five.
How do you want to configure those two libraries
with one VERBOSE flag in your CMake files?
It's not easy.
It's not easy.
So, Modern CMake is about targets and properties,
like Daniel said on the previous slide.
What I mean by this?
We create targets with following commands.
We can create executables with add_executable,
and we create libraries with add_library.
The library can be shared, can be static,
and be object, can be interface,
or so-called a library alias.
Also, you may define something called important library
which means that the library
is not being to be built from sources.
Cmake just assumes that you have already binary,
and we'll package this binary as it is,
with all these CMake configuration files.
If no type is provided in add_library call,
then it's either static or shared,
depending on the configuration flag
build_shared_libs of CMake.
So, what I can suggest here
is to use those explicit add_library types
for the internal parts of your package.
For example, you may have three different static libraries
that are linked into the one library that you export
as the product of your company.
Those internal parts should probably be static,
and they are linked into an output library.
This output library can be either static or shared,
and it's up to the client to decide.
So the type of this output library should not be
hardcoded as either static or shared.
It should be configured by the client with CMake flags,
depending on how they want to use your library,
which is linked from all of those internal parts,
as shown in the sketch below.
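A minimal sketch of that layout, with hypothetical target and file names:

    # internal parts: explicitly static
    add_library(my_io STATIC io.cpp)
    add_library(my_core STATIC core.cpp)
    # exported library: no STATIC/SHARED keyword, so the client
    # decides with -DBUILD_SHARED_LIBS=ON/OFF
    add_library(mylibrary api.cpp)
    target_link_libraries(mylibrary PRIVATE my_core my_io)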
In order to create targets and provide those properties,
there are the target_xxx commands.
One of those, and the most important one, is
target_link_libraries.
As you can see, the first argument of this command
is the target you are either creating
or modifying with some properties.
And those properties are provided in a list,
tagged with the keywords PRIVATE, PUBLIC, and INTERFACE.
If the things you are adding
are needed by me as a package creator and by my dependers,
then the dependency is PUBLIC.
If it's needed by me to build the package,
but the clients of my package
will not see it and not use it,
it's a PRIVATE dependency.
If it's used by the clients,
but not used by me during the build phase,
for example because mine is a header-only library,
so I am not building it at all,
it's an INTERFACE dependency.
And if it's not needed by me
and not needed by the clients,
then we don't bother with it here.
The most important part here
is that INTERFACE and PUBLIC dependencies
are transitive, while PRIVATE ones are not.
So, if you add something as PUBLIC, for example,
to your library, then all the dependers
that link with your library
will know about it
and will get those properties in their own builds.
So we don't have to propagate the include directories
up to the top of the hierarchy, because they will be
imported transitively from the library
with the PUBLIC dependency.
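A hedged sketch of those three scopes, with hypothetical targets:

    add_library(liba liba.cpp)
    # Boost types appear in liba's public headers,
    # so the dependency propagates to dependers
    target_link_libraries(liba PUBLIC Boost::boost)
    # GTest is only used to build liba's own tests,
    # so clients never see it
    target_link_libraries(liba PRIVATE GTest::GTest)
    # a header-only helper that only liba's clients compile against
    target_link_libraries(liba INTERFACE my_header_only_helpers)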
Using CMake in the old style leaves us with a design
like this; this is the large-scale design of the modules
in a real project.
This diagram was generated directly by CMake
from one of the projects I worked on.
When I started to work on it, they were using
old-style CMake, and as you can see, it's a mess, because
they had to distribute those include directories
and other properties up to the top
of the hierarchy.
It turned out that one of the targets was totally
disconnected from the others, and they thought it was fine
because of the linker magic in C++.
When I refactored this, I ended up
with a graph like this one.
We have a clear layering of the modules that we use here.
It's much better; that target is now properly connected
to the other parts of the library. I also found out
that it didn't compile at all when I did this,
and I had to introduce private links here, yes?
These are problems in the design that I found out only
when using proper CMake, because it exposed the problems
in the large-scale architecture of my project.
And these are cyclic dependencies; if you don't agree
that they are a problem, talk to John Lakos.
Basically, it's a really bad idea to have cyclic
dependencies in the physical design of your library.
I mentioned a bit about the alias targets that you can
create for libraries. They are used to create
something that looks like a C++ namespace for a library.
You are probably aware of this namespace already
if you are using find_package in CMake.
If you do find_package(GTest), it will provide you
a target called GTest::GTest, for example.
This is exactly what an alias gives us.
It's up to us to refer to the CMake library either
by finding it with find_package or by just defining it
like this here, yes?
We are aliasing a library, and we are saying it will be
used under this name, with this link here.
So then it doesn't matter whether you used find_package
or the direct definition; in both cases, this one will
compile, because it will link correctly and will know
what MyCompany::MyLibrary means.
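A short sketch of that pattern, with hypothetical names:

    add_library(mylibrary src/mylibrary.cpp)
    # make the in-tree target look like the imported one
    # that find_package would provide
    add_library(MyCompany::MyLibrary ALIAS mylibrary)
    # consumers link the same way in both cases:
    target_link_libraries(app PRIVATE MyCompany::MyLibrary)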
Another really important point
here is generator expressions.
I know that their syntax is really, really not friendly,
and I really don't like it, but unfortunately they work
for specific cases and you have to be aware of them.
The difference here is that generator expressions are not
evaluated during the configuration step
of CMake; they are evaluated later on, for example
during the build or install phase.
One use case for them is, for example, this one.
You can see here that we have an if branch
on the build type.
If it's Debug, we add some debug sources, and in the other
case, we add release sources to our library, yes?
However, if you remember, this will only really work for
GCC and Clang, because there we know whether it's Debug
or Release during the configuration stage.
If you work with the Visual Studio generator,
which doesn't provide any build type during the
configuration stage, it will not work correctly for you,
because the build type is specified during the build phase,
and this if happens during configuration.
So you have to write such a generator expression in order
to make it work correctly: you say that if the config
is Debug, then you use the debug sources,
and in the other case, the release sources.
And this will be evaluated during the build phase of CMake.
The key point here is to never use CMAKE_BUILD_TYPE
in an if branch; always use a generator expression
on the configuration instead.
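A hedged sketch of that replacement, with hypothetical file names:

    # evaluated at build time, so it also works with
    # multi-config generators like Visual Studio
    target_sources(mylibrary PRIVATE
        $<$<CONFIG:Debug>:debug_checks.cpp>
        $<$<NOT:$<CONFIG:Debug>>:release_impl.cpp>)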
Another use case for this weird syntax is that when you
are building your package, you are using the include
directories located in your source tree.
After the installation, users will use
the include directories
that are in the include directory of your
installation target on their file system, yes?
In order to make it work correctly, you say that the
include directories for your target Foo are
either in the source tree, if you are building,
or on the file system where you installed the software,
when you are after the installation phase,
so when someone is using the installed software.
With that, your target will always find the correct
headers for your project.
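A minimal sketch of that, assuming a target named foo:

    target_include_directories(foo PUBLIC
        # used while building from the source tree
        $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
        # used by clients after installation
        $<INSTALL_INTERFACE:include>)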
This is the simplest example of a modern library.
First, we provide information about which CMake version
is needed to configure and build this stuff.
Then we provide the project name and its version.
The next phase is adding the dependencies with find_package.
And then you define your package.
To define your package, you basically define which files
are used for compilation, and you define which compiler
features are required, I mean which C++ version;
I am not talking about warning levels here, to be explicit.
I say where the include directories will be during
the build and after installation.
I provide this alias target in order to make
this library work the same whether it comes from
find_package or not.
And this is where I create this link, yes?
As you can see, quite a lot of things are done here.
This is a fully featured library file for CMake,
and I didn't use the set command at all in this file,
and everything works fine, yes?
Please avoid custom variables in your CMake files,
because they only give you problems.
For example, they are case sensitive, and if you
make a typo, it will still configure fine
but will not work correctly.
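A hedged sketch of such a minimal, modern CMakeLists.txt, with hypothetical names and versions:

    cmake_minimum_required(VERSION 3.8)
    project(mylibrary VERSION 1.0.0)

    find_package(Boost REQUIRED)

    add_library(mylibrary src/mylibrary.cpp)
    target_compile_features(mylibrary PUBLIC cxx_std_14)
    target_include_directories(mylibrary PUBLIC
        $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
        $<INSTALL_INTERFACE:include>)
    target_link_libraries(mylibrary PUBLIC Boost::boost)
    add_library(MyCompany::MyLibrary ALIAS mylibrary)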
This is how I organize the files in my projects.
Basically, I have one top-level directory that contains
two subdirectories: src and test.
In src, I have my library that is built
and installed. It has a CMakeLists.txt file that can be
built standalone; I mean, just going
into the directory and building
will make it compile fine.
And this one never sets compiler warnings.
This is a really important point.
You may be sure that, for example, right now your library
doesn't produce any warnings for your clients, because
you checked it during the CI phase. But in three years,
when there are 10 new versions of GCC and Clang, maybe
they will find some bugs or some problems in your code
and emit some warnings, and then the clients will have
problems building your library as a dependency,
only because some warning appeared that they don't care
about; because, I assume, you rather rarely fix
the dependency packages when you find
compilation warnings there.
So it's up to the client to decide which warning flags
should be set, not up to you as a vendor
that distributes the package.
If you want to set some warning flags for your own
build, do it in the top-level file that is used
for the development process.
Next, there is the test directory that contains
all the unit and feature tests.
Those also have a standalone CMakeLists.txt file that can
be used directly from this directory.
And there is a wrapper, the top-level wrapper, as I said,
that can, for example, define those warning flags
for you, and it is basically used as an entry point
for an IDE, for example, or for some CI process.
So basically, this is what I open in Visual Studio
to develop the package,
so I am not forced to install the library before I can
test it, for example, during my development process.
This entry point looks like this. It is a simple file
that just does add_subdirectory(src) and
add_subdirectory(test), and there is no magic here.
As I said, it's just used by the IDEs I use
for development. And this is how the unit testing
CMake file looks.
The important information here
is this if branch.
It says that if there is no target already known that's
called MyCompany::MyLibrary, it will find it
with find_package; otherwise it will reuse the one it
already knows. And it will know it only if this file is
run through the top-level wrapper file that has already
imported the src directory here, yes, here.
So it imports src and then imports test,
and this test will then already know
about the target and will not use find_package.
If you run the tests standalone
from their directory, it will not know about MyLibrary
and will use find_package to import it
from the installation directory.
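A hedged sketch of that guard in test/CMakeLists.txt, with hypothetical names:

    if(NOT TARGET MyCompany::MyLibrary)
        # standalone run: use the installed package
        find_package(MyLibrary CONFIG REQUIRED)
    endif()
    add_executable(unit_tests test_mylibrary.cpp)
    target_link_libraries(unit_tests PRIVATE MyCompany::MyLibrary)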
So, basically, we compiled everything, built, verified
that everything is okay, and we are done,
yes, with our package?
No, we're not. That would be an asocial library,
as David Sankel defined it some time ago.
We have to provide an installation interface
and the things that will allow users to reuse our library.
We have to export our headers, we have to export our
binaries, and we have to provide CMake configuration files
in order for others to be able to reuse the stuff
that we developed so hard.
So, basically, this is what we do.
This is a brief version of the CMakeLists file
that we already analyzed;
it's a summary of the things that are there.
And then you have to install stuff.
We have to say which targets are installed.
They are exported as MyLibraryTargets, and this defines
where they will end up in the file tree.
Then we specify where this MyLibraryTargets
export file will be installed on the file system,
what the file will be called, and which
namespace it will use; this is exactly the same namespace
we used for the alias target.
I know that's a lot of boilerplate, and unfortunately
there is no simpler way to do it,
at least for now.
You also have to copy your public headers, and only
the public headers are needed there, and you have to
provide the information about the version of the package
that you are distributing. You can use
CMakePackageConfigHelpers to create this version file,
and then you install it together
with a custom file called MyLibraryConfig.cmake
that you can create easily with a few lines.
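A hedged sketch of that installation boilerplate, with hypothetical names and destinations:

    install(TARGETS mylibrary EXPORT MyLibraryTargets
        LIBRARY DESTINATION lib
        ARCHIVE DESTINATION lib
        RUNTIME DESTINATION bin)
    install(EXPORT MyLibraryTargets
        NAMESPACE MyCompany::
        DESTINATION lib/cmake/MyLibrary)
    install(DIRECTORY include/ DESTINATION include)

    include(CMakePackageConfigHelpers)
    write_basic_package_version_file(MyLibraryConfigVersion.cmake
        COMPATIBILITY SameMajorVersion)
    install(FILES MyLibraryConfig.cmake
        ${CMAKE_CURRENT_BINARY_DIR}/MyLibraryConfigVersion.cmake
        DESTINATION lib/cmake/MyLibrary)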
Okay, so this is the package flow that I use during testing.
As I said, I change to the src
directory and use the standalone CMake file in order
to configure, build, and then install
the library being developed.
Then, to verify that it can be imported by users,
I go to the test directory, the CMake
configuration step is done there as well, and I run ctest
to verify that it works correctly.
With this, I am able to verify that the package is created
and imported correctly.
Okay, so that's it about CMake.
It turns out that CMake is not the last tool
in the toolchain we have to use.
It answers the question of how to build a project
on many different systems, with many different compilers.
It's good at that, but at distribution of packages
it's not that good at all.
Using this find_package workflow that we already defined,
we may have some problems.
For example, let's say we have to recompile
a new version of a dependency
low in the hierarchy.
Then we have to recompile everything above it in
our project, yes?
It doesn't scale if you have many different dependencies,
many different configurations, and so on.
And consider, for example, that every time you build
your library, you would like
to verify that it works correctly on GCC, Clang, and
Visual Studio, in both Debug and Release, and maybe
with some custom flags for different builds, like for
example whether you are using exceptions or not, or which
C++ version you are using: 11, 14, 17.
How do you install all of that stuff to the file system?
In which directories do you store it?
How do you find those binaries in 16 different
versions with find_package later on?
It doesn't scale at all.
What we want here is a package manager. We would like
to have one big process that builds all of the dependencies.
It should rebuild a project only if needed, only if there
are no up-to-date binaries for it.
We shouldn't be forced to manually download anything
from the internet; it should do it for us.
And if there is something already installed on the file
system, we should not be forced to use it; for example,
if some Boost is already there, we may want to use
a different version of Boost.
These were my only requirements for package management
for C++ at first. However, when I started to work with
Conan, I found out that I had requirements that I didn't
know about either.
It was exactly the same for me when I started to work
with Git. I never thought that I would be able to go
offline to my lab with a pen drive and have a full
version control system there on my pen drive, yes?
That was a game changer, and it opened my eyes to new
requirements. Conan did the same for me.
These are the features that I now require
from a package manager.
I need decentralized servers, the same as with Git.
I need a server for Open Source, I need a server
for my company's stuff, and I need a private server
to play with.
I want to have a central cache of packages on my local PC,
so, for example, if I have 10 or 20 different libraries
that use GTest or Boost, I don't want to store
the sources and binaries in every subdirectory
of those projects on my disk. If I'm
compiling the same version with the same settings, I want
to have only one binary on my file system;
otherwise my SSD would have enough disk space
for three projects only,
talking about Boost, for example.
I would like to have support for always compiling my
projects from sources if needed, for performance reasons,
because a binary compiled from sources on your machine
can be the fastest possible, but it takes time.
If I want to spare some time, if I want to make it faster,
I would like to use prebuilt binaries if they are
available, for CI, for example.
And in some cases, you don't have sources at all.
There are closed-source libraries distributed only as
binaries and headers, and there are also development
tools, like, I don't know, CMake or NASM or other stuff,
where you don't have the sources.
You only have binaries, and you want to package them too.
The last point here, which is really important for me,
and which I only discovered later, is offline use,
just like with Git.
With Conan, I'm able to start a new project on a plane
going to CppCon, and it works fine. It will install
and compile GTest for me if needed, because it
already has the GTest sources in its cache.
So I can work offline and then continue anywhere.
It's a really nice feature.
So, talking about Conan: Conan is open source software
with an MIT license, and it is decentralized,
as you probably already noticed.
The servers are dumb; they're not doing anything, they
are just package storage. All of the logic is
in the client.
The client is responsible for packaging, for building,
for installation.
It's portable to every platform that supports Python,
and it uses Python as a scripting language.
It works with any build system on the market right now,
and it's easy to host.
These are the ways you can host packages: you have
JFrog Artifactory, conan_server, and JFrog Bintray here,
and this is our client.
The client is a simple terminal application.
It is responsible for package creation and consumption.
It also keeps the central cache on your PC, as I said.
It has really nice features and allows you to work offline.
JFrog Bintray is the most popular server
with Conan packages.
It's free for open source, and you need to create an
account only if you want to upload stuff.
If you only want to consume packages, you don't need
an account there.
conan_server is a really simple TCP server for Conan,
and I don't recommend using it. What I recommend
for use in your company is JFrog Artifactory.
It is a high-quality server, and
the Community Edition is free to use, which is really nice.
The most popular Conan remotes on Bintray are Conan-Center,
Bincrafters, and Conan-Community.
There, you will find many existing packages
of high quality, and you can start playing with them.
To start playing with packages, you have to know how
a package identifier works in Conan, because it's
a bit involved. It has a package name; it has a package
version, which is basically a string (in most cases
it's a number, but it's a string,
and you can put arbitrary strings there).
The user part says who the owner of the package is.
So, for example, you may have your own Boost build
on your servers, and it will just work fine.
You may build it with different flags, or you may
have some patches for it, and it will still be named
boost, but the user part will say that it is the Boost
of your company, in your configuration.
It works really nicely in this case; it's fine.
And the last part here is the channel.
It says what the stage of the package you are using is,
whether it's stable or testing. And I mean the state
of the packaging source code,
not of the packaged software itself, yes?
So basically, the Conan stuff: what is the state of it?
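As an illustration, a full reference combining those four parts could look like this (hypothetical owner and channel):

    boost/1.66.0@mycompany/stable    # name/version@user/channel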
A Conan package looks like this; this is the server view.
You have here the exported,
so-called recipe files, and you have one or more
binary packages on the server.
The binary packages are identified by this hash, which is
computed from a specific set of settings and options.
This is how it looks on your file system; it looks
much the same.
The interesting part here is that you have,
for example, only one hash here, because I needed only
one binary for my build. I don't download all the binaries
from the server if I only need to use one, yes?
Conan downloads only the one that's needed, and the rest
stays on the server all the time.
To search for a package on a server, use conan search.
Basically, conan search without other options checks
what you already have in the local cache, but if you want
to check a remote, you provide the remote name,
and, for example, this will print all the GTest versions
that are on Conan-Center right now.
And this will print the information
about a specific package.
I will show you how that looks on the next slide.
And this, for example, will print you the matrix of
all of the binary packages available on the Bincrafters
server for GTest.
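Hedged examples of those commands, using the Conan 1.x syntax of that time and a hypothetical reference:

    conan search "gtest*"                    # what is in the local cache
    conan search "gtest*" -r conan-center    # versions on a remote
    conan search gtest/1.8.0@bincrafters/stable -r bincrafters
                                             # matrix of binary packages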
This is how the package information looks.
You can see here that there are some options
defined for the package, like, for example, whether it
is built with gmock or not, because it's GTest, yes?
And there are also settings. Settings are operating
system and build-specific stuff, like which compiler was
used for compilation, what the architecture was,
whether it is a Debug build, for example.
And all of this information is available on the command line.
You can also inspect the dependencies of a package
with conan info.
Pointed at your conanfile, it will
show your current project's dependencies.
You can inspect the dependencies of a specific package,
and you can even generate a graph of those dependencies.
This is how this information looks for the dependencies.
For example, for OpenSSL here, we have the information
that it uses zlib, and then we have the zlib entry
saying that it is required by OpenSSL.
To install dependencies, you run the conan install command.
You can provide the package identifier
and the generator manually, but in most cases
we use so-called recipe files.
The recipe files are conanfile.txt and the conanfile.py
Python script.
And we can use profile files in order to say which
configuration we are using for the build:
whether we are building for Visual Studio or
for GCC, whether we are using Debug or Release. You can
create as many profile files as needed.
If a profile file is not provided, then the special
profile named default is used for you.
You can also say how to handle specific dependencies:
whether they should be built from sources if binaries
do not exist, or whether conan install should just raise
an error saying that you don't have binaries for this
package.
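Hedged examples, assuming a profile file named clang exists:

    conan install .                          # uses the default profile
    conan install . --profile=clang -s build_type=Debug
    conan install . --build=missing          # build deps lacking binaries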
This is how a conanfile.txt recipe file looks, yes?
It has three sections: requires, options, and generators.
Basically, the cmake generator provides some CMake
variables and functions, but there are generators for
any build system; we use CMake here
because, as I said,
CMake is the way to go for C++.
And this is the same file, but in the Python version.
You can see that it provides exactly the same information
as the text file.
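A hedged sketch of such a conanfile.txt, with hypothetical references:

    [requires]
    gtest/1.8.0@bincrafters/stable
    boost/1.66.0@conan/stable

    [generators]
    cmake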
When you want to set some options in those files,
you can say, for example, that GTest will be built
as a shared library, and this looks the same in both files.
You can also override this from the command line,
so you don't have to modify the recipe file.
When you try to run with shared libraries built by Conan
and another build system, you will probably find out
that your application
doesn't find the shared binaries
when you execute your product.
For this, we have the imports feature, which copies
the binaries into your local directory
and makes it possible
for the application that you are building to find them.
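A hedged sketch of the option, its command-line override, and the imports section:

    [options]
    gtest:shared=True

    [imports]
    bin, *.dll -> ./bin       # copy shared libraries next to the binaries
    lib, *.dylib* -> ./bin

    # or, overriding without touching the file:
    #   conan install . -o gtest:shared=True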
If you need more from Conan, you can switch to the Python
recipe and extend it. With the text file, you cannot do
much more, but with Python you can.
For example, you can provide a build method that says
how to build your project.
With that, you then need only those two commands
on your command line: you say that you want to install
the dependencies for Visual Studio 2017, for example,
and then build.
You don't have to bother with CMake commands at all;
Conan will build it for you based on the
recipe provided in the build method.
You can also provide some options for your package,
saying, for example, that if the testing option is set
to True, you will be using the testing versions
of some dependencies,
and if you set testing to False,
which is the default, you will use the stable versions
of the dependencies in your project.
You put such additional logic in the requirements method
of your Conan recipe file.
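A hedged sketch of such a Python recipe (Conan 1.x API, hypothetical names and references):

    from conans import ConanFile, CMake

    class MyProjectConan(ConanFile):
        settings = "os", "compiler", "build_type", "arch"
        options = {"testing": [True, False]}
        default_options = "testing=False"
        generators = "cmake"

        def requirements(self):
            # pick the dependency channel based on our option
            channel = "testing" if self.options.testing else "stable"
            self.requires("mylib/1.2.0@mycompany/%s" % channel)

        def build(self):
            cmake = CMake(self)
            cmake.configure()
            cmake.build()

    # usage sketch:
    #   conan install . -o testing=True
    #   conan build .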
And there are many, many more features;
we'll see a few more later on.
Conan provides profiles; I already mentioned them a bit.
There is a default profile,
and there are the profiles you create yourself.
They have settings, options, environment variables,
and build_requires sections.
Such a file looks, for example, like this: you have
information about the operating system you are building
on, the operating system that you are building for (this
is used for cross-compilation),
which compiler you are using, in which version, which
version of the standard library you are using,
whether it should be a Release or Debug build, or, for
Visual Studio, which runtime version you are using.
The environment variables here, in the clang profile,
describe how to find
the Clang compiler on your file system.
You can override settings from a named profile
with the -s option; with this, Conan will reuse all
of the settings from the clang profile but use Debug,
yes, as the build type.
You can have profile files either in the default
Conan profiles directory, or you can store them together
with your project and distribute them with it.
And you can also include
other profiles inside a profile.
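A hedged sketch of such a clang profile and its use, with hypothetical paths and versions:

    [settings]
    os=Linux
    arch=x86_64
    compiler=clang
    compiler.version=6.0
    compiler.libcxx=libstdc++11
    build_type=Release

    [env]
    CC=/usr/bin/clang-6.0
    CXX=/usr/bin/clang++-6.0

    # usage sketch, overriding one setting:
    #   conan install . --profile=clang -s build_type=Debug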
Conan packages.
When creating packages, we have a few options
to consider here.
The first is that the package is created independently
from the packaged project's source code. So, for example,
you can create your own packaging source code
to package Boost or GTest without having to modify
the Boost or GTest code, yes?
You create another repository that contains only
the packaging code, and GTest stays in its own
repository that you use for packaging.
This is used for packaging third-party code.
When you are packaging your own library, you also
have different options here.
You can export all of the sources to the package servers,
the Conan servers, together with your package,
or you can still keep the code
in a repository like GitHub
and store only the recipe on the Conan server.
You can package existing binaries. This is for the case
when you have only prebuilt binaries,
for example from third parties, or when you,
for example, have already created some expensive
CI for your project and you have everything built
on your disk, and you don't want to rebuild everything
from scratch to create a package.
You just want to reuse what you already have;
then you can use this approach.
And the last point here is packaging
development tools. I assume that you are not compiling
GCC or Clang or other compilation tools
every time you build your project, yes?
So basically, you use those binaries, and you can provide
them in Conan as dependencies too.
For example, you can install things like NASM
together with an OpenSSL package in order to make it
compile from source as needed.
In the later parts of the presentation, we'll focus
mostly on point one here, because it's the easiest
one, but the others are very similar.
So, this is how such a package looks.
It has a conanfile.py recipe file, and it has the license
of your packaging content,
not of the packaged content, but of your packaging code,
so it is stored right here.
And it has a test_package directory, which basically is
a client of your package, where the configuration
and installation parts are verified.
This is how such a recipe looks.
You basically provide the package information.
You provide the license of the packaged product,
so what the license of, for example, the Google Benchmark
stuff is.
You say where to find the sources being packaged;
you export the license of your packaging software,
so the license of the packaging code itself.
You declare which settings will be used to identify
binaries: each change in those settings will create a new
binary in Conan. The same goes for options:
once again, changing those
will make a new binary package in Conan.
This identifier, this hash, will be different if you
change any settings or any options here.
You can also provide defaults for those options,
as you can see.
And you can provide the newer attribute called scm, which
is the information about where the sources that will be
packaged into your product are stored.
So, where is the code that is being packaged?
An important point here is to understand the difference
between settings and options.
Settings are basically the project-wide things,
OS-specific things, compiler-specific things
that are provided for the compilation stage.
And options are the things that the user may choose,
like whether to build GTest with gmock or not,
or whether to enable, for example, exceptions
in the project or not.
And for those, you can provide default_options.
Other useful attributes in the recipe file
are the generators. Basically, you will want a
generator if you have any dependencies in your package;
however, you can also use them for setting things up
in CMake, for example.
requires provides the dependencies of your package.
You can have private dependencies, for example, here,
and you can also provide version ranges, but
that is supported yet not considered
a good practice, as you will learn later on.
Another attribute that may be useful is build_requires.
These are also dependencies, but they are used by the
build process and are not used by the clients.
exports will copy the listed files into the package;
for example, you can export the license this way.
Next, exports_sources creates a snapshot of your sources,
so this would be point two in the packaging scenarios.
You can add a homepage, where people can find
the project being packaged.
Besides attributes, there are also methods
that can be used in a Conan recipe file.
Let's create a helper method called _configure_cmake
that basically creates a CMake helper object,
provides some definitions, and runs configure
at the end.
This CMake object will be used later on by the build and
package methods. It's important to note here that
you may find an option, shared,
that is not being passed to CMake here. It would
normally map to the BUILD_SHARED_LIBS flag of CMake,
but this is such a well-known flag that Conan
sets it by itself in this helper,
so you don't have to bother with it in your code.
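A hedged sketch of such a helper (Conan 1.x; the definition name is Google Benchmark specific and is an assumption):

    def _configure_cmake(self):
        cmake = CMake(self)
        # don't build the project's own tests when packaging
        cmake.definitions["BENCHMARK_ENABLE_TESTING"] = "OFF"
        cmake.configure()
        return cmake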
Another important point here: for example,
Google Benchmark has testing set to ON by default,
and you don't have to pay for that every time
you build your package from sources,
because you are not running those tests anyway.
Also, it's not a good idea to expose this
as a recipe option, because whether you set it
to True or False, it will not modify the contents
of the binary package that is exported from Conan.
And, as I said, setting different versions or setting
different values for options generates a different
package ID, so a different package for you.
So you would export and distribute a separate
package for building with testing and building
without testing, while the contents
would be the same, which is not wise.
So, we have a build method here that basically takes this
helper, creates the CMake object, and runs the CMake build.
We have a package method that also configures CMake,
runs the install step, and that is what ends up in the
package being distributed.
And we have package_info, saying how the clients
will link with our library and where to find information
about it. We say that there is a benchmark library
that stores all the information needed
by those who want to link with it.
Other useful methods: source will, basically,
download the sources, from Git, for example.
It's used mostly when you're not using the scm
attribute that I showed you before.
requirements provides more advanced logic
than the requires attribute.
build_requirements is the same, but for build-specific
dependencies. imports you've already seen in the shared
library case, when we imported the binaries
so they could be found next to the executable.
And package_id defines how your hash is
created from the settings;
for example, you may have a specific configuration for
header-only libraries here.
You may have a configure method to specify how to
configure the package, and to remove some options
in specific cases. For example, you may say that
fPIC is not available on Windows,
and in such a case, the package will not have this option
when the OS setting is Windows.
The test_package directory basically provides a simple,
very simple, package verification test.
It is not unit testing, it is not feature testing.
It's mostly just a linking test for your Conan package.
This is how the recipe file for it looks.
What you may find interesting here is that it has neither
a requirements method nor a requires attribute,
because Conan will inject the requirement for you,
with the version you are creating right now,
so you don't have to update it every time
you are building a new version.
To create a Conan package, you use the conan create
command.
It will export the conanfile.py script and all the
exports to the local cache.
Then it will build and install the package.
Finally, it will create a temporary directory from the
test_package directory and will use conan install there
to get your library as a dependency and verify that it
builds correctly.
The same stuff once again: at the bottom you have
your local directory, and in the upper part you have
the Conan local cache.
When you do conan export or conan create, all of these
things are exported to the local cache:
we export the recipe, and we export sources,
if exports_sources was provided.
Then all of this is copied to the source directory,
and then the scm attribute is used to download
the rest of the sources, or the source method is
called to do some logic, for example, to download
the sources or maybe patch them,
because you may have some patches here,
do some modifications to CMake files, and so on.
And then the sources are copied
to the binary directories.
This is done because sometimes, during the build phase,
you modify the sources, and you don't want to have one
shared directory that is used by all the builds here.
Then you build it, with the build method.
After that, when the build finishes correctly,
the package method is run, which basically
copies everything to the package directory,
the installed stuff, and this is what you distribute
to the Bintray server later on.
When you run conan install on the client side,
it will import all of the binaries and will create
some files with the generators.
When you are developing a package, you may use the
package development flow in Conan.
You start with conan create user/channel.
If you find out that the source step completed correctly
but something later on fails, you may run it again
with --keep-source, so you will not pay for that stage
again; it will start from that point.
When you know it builds correctly, you may pass
--keep-build and only verify that the packaging is
correct and that the test_package is run.
You can rerun the tests with the conan test command
if you need to.
There are also even more fine-grained commands:
you may run only the source step, or the installation
step, or the build step, or the packaging step, or the
testing step, and the build step even has additional
options to run only configure, build, or install,
the specific stages of the build phase.
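Hedged examples of that flow (Conan 1.x flags, hypothetical reference):

    conan create . user/channel
    conan create . user/channel --keep-source   # don't re-fetch sources
    conan create . user/channel --keep-build    # don't rebuild
    conan test test_package mylib/1.0.0@user/channel
    # fine-grained steps:
    conan source . && conan install . && conan build .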
What does it mean to distribute
a good quality package in Conan?
Basically, your Conan recipe file should have a
description, name, license, and some hyperlinks to your
homepage, to licenses, and so on.
It should use lowercase letters for package names.
It should have a test_package that verifies
that your package compiles fine.
It should raise errors on every invalid configuration.
It shouldn't use version ranges, because there are some
problems with them.
It should package both the license of the project
you are packaging and your own license
as the package author.
And you should use clean Python and the latest
features of Conan, because they help a lot.
If you are done with package creation, you probably want
to share it somehow with others.
For that, you add a remote to your Conan client with the
conan remote add command, for example for Bintray, and
when the remote is added, you upload with the
conan upload command.
--all here means that you are uploading all
of the binaries created in your local cache,
so, to Bintray.
And you are basically done with it.
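Hedged examples, with a hypothetical remote name, URL, and reference:

    conan remote add my-bintray https://api.bintray.com/conan/mycompany/public-conan
    conan upload mylib/1.0.0@mycompany/stable --all -r my-bintray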
That's all you have to know about Conan in order
to create packages. Now we'll focus more on
the cooperation of Conan and CMake.
We have four generators for CMake:
cmake, cmake_multi, cmake_paths, and cmake_find_package.
cmake is the basic generator; it's the oldest one.
It creates a conanbuildinfo.cmake script
that keeps variables and some helper functions.
In order to use them, you have to run the
conan_basic_setup macro, and this is how you do it:
you have to modify your CMakeLists file to run this
command here. TARGETS means that you are using the
targets version, so modern CMake is used instead
of the old style.
If you do not provide TARGETS,
the old CMake style is used.
I recommend using this if branch here, so that it
still configures if you are using plain CMake with
find_package, or providing GTEST_ROOT or other variables
to find packages, and it will not force everyone to use
Conan all the time.
Conan is a fresh tool; it's not
a standard yet, and not everyone is using it.
You shouldn't force your end users
to use Conan right now.
Maybe in five years, when everybody has Conan
on their development platform, this if
branch will not be needed here, but for now,
it's an option, not a must.
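A hedged sketch of that guarded setup in CMakeLists.txt:

    if(EXISTS ${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
        # present only when 'conan install' was run for this build dir
        include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
        conan_basic_setup(TARGETS)
    endif()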
The cmake_multi generator is the one used, for example,
with Visual Studio, which doesn't specify whether you are
using Debug or Release during the configuration phase;
that is specified later on.
It has some problems: it cannot be used
for package creation, and it makes find_package
in CMake not work correctly, because it will not know
whether it has to find the information
for Debug or for Release, for example.
So basically, I don't recommend using that one.
The new generators in Conan are cmake_paths
and cmake_find_package.
cmake_paths will create
a conan_paths.cmake script that sets only two
variables: CMAKE_MODULE_PATH, used by find_package,
and CMAKE_PREFIX_PATH, needed by find_library.
In order to use it, you can either add one include line
to your CMake script, or you can even use it without
any modification of the CMake files, with a specific
CMake define, CMAKE_PROJECT_<PROJECT-NAME>_INCLUDE.
In that case, it's non-intrusive: you don't have
to modify your CMake files at all.
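Hedged usage examples:

    # option 1: one line in CMakeLists.txt
    include(${CMAKE_BINARY_DIR}/conan_paths.cmake)

    # option 2: no CMake changes at all
    cmake .. -DCMAKE_TOOLCHAIN_FILE=conan_paths.cmake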
The cmake_find_package generator will create
Find<package>.cmake scripts, and it's possible to use it
without any changes to the CMake files when using a
conanfile.py. When using a conanfile.txt, you may have to
modify the CMake file or use it together with cmake_paths.
At the end, since we have only three minutes left,
I would like to show you some BKMs (best known methods)
I found during package creation. This is how you can
automate CMakeLists file modifications so you don't have
to do them manually.
Basically, you can use the tools.replace_in_file helper
to replace the line saying project(MyLibrary) with
project(MyLibrary) plus the Conan include and setup code.
Or you can do it another way: you can download
all the sources into a subfolder and export a small
CMakeLists.txt file like the one you see here.
It's basically a wrapper that does the Conan-specific
stuff and runs the CMake code from the sources
subfolder, yes?
If you want to run unit tests during the package build,
you may use the CONAN_RUN_TESTS environment variable.
There are some additional helper methods used here,
like build_requirements, where I specify some logic
for extra requirements during the build phase.
You provide some definitions, and you run the tests
in the build step only if this variable is set.
In order to use it, you either set the environment
variable, or you provide it in a profile file, or you
pass it on the command line with -e.
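A hedged sketch of that pattern (Conan 1.x; the GTest reference is hypothetical):

    from conans import ConanFile, CMake, tools

    class MyLibConan(ConanFile):
        # ... name, version, settings, etc.

        def build_requirements(self):
            if tools.get_env("CONAN_RUN_TESTS", False):
                self.build_requires("gtest/1.8.1@bincrafters/stable")

        def build(self):
            cmake = CMake(self)
            run_tests = tools.get_env("CONAN_RUN_TESTS", False)
            cmake.definitions["ENABLE_TESTS"] = "ON" if run_tests else "OFF"
            cmake.configure()
            cmake.build()
            if run_tests:
                cmake.test()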
You can also ship custom FindXXX.cmake scripts
with your package. This is for the case when the package
that you are packaging is asocial, so it doesn't
provide any installation targets and installation
information with it.
Or maybe the find file that exists looks, for example,
always in the C:\ drive on Windows, yes?
So basically, it will not work with Conan.
For this, you just create your own FindXXX.cmake file,
add it to exports_sources, and then copy it, in the
package method, to the place where the package is stored,
and it will be found by CMake later on.
The last BKM I wanted to share with you
is: do not use class attributes, class variables,
to share state.
It will work fine when you are running the conan create
command, because that command runs all of the member
functions of the recipe. However, if you would like
to repeat only some steps, for example to run
the conan package command, it will run only that
function, and that function will not see those variables,
because those variables are set only in source.
So it will not work for you.
So, that's all.
As a summary: CMake.
Many projects still do not use CMake.
I would recommend them to start using it, because it's
the standard on the market.
If they are using CMake, sometimes they don't use the
modern style of CMake, as you could see,
so I strongly recommend using Modern CMake there.
And be social, yes?
As for Conan, I found it to be
a production-quality package manager.
It's designed with C++ in mind,
so it works really well for us.
It's free, MIT licensed, and the servers are free as well.
It's quite easy to use if you understand the flow:
how the packages work and what the steps
in the process are. And all of this is really well
documented; there are like 400 pages of documentation
for Conan,
so it's like a book about Conan for free; look for it.
So give it a try.
With that, it seems I'm out of time right now,
so thank you, everyone, for coming.
I can answer questions during the break
later on, but we should probably
stop the recording right now.
Thank you very much.
(audience applauding)