Diagnosis -> Design -> Delivery

Our Solutions

In a world shaped by algorithms,
dataLearning delivers effective technological and ethical solutions
with a commitment to excellence, integrity and clarity
Data Architecture

Batch processing or real-time streaming?
Cloud or in-house?
From data silos to the data lake,
every solution is specific to your situation. We help you find the technology that best fits your situation and your team.

Data Management

Data is an asset for your company,
but to generate value from it, you need a governance framework.
We help you take an inventory of your data; we define the processes and controls needed to establish a data catalogue that serves the business.
Using dashboards, you can visualise your data and uncover new insights.

Data Science

Machine learning,
artificial intelligence,
optimisation, statistics,
advanced analytics,
proof of concept:
we implement the algorithms that answer your problem, using structured or unstructured data.
The most complex projects are developed in partnership with academic research.


We like to create a culture of knowledge sharing.
We organise training courses on big data technologies, data visualisation and data science, as well as workshops on data-driven company culture.

Talent acquisition

dataLearning is a partner for data experts

We value the human factor, among our clients as within our own team, in a climate of mutual trust, development and respect.

At dataLearning, we offer you more than a job. We can support you and help you progress and develop your own cultural identity.

If you are an expert in programming or statistical analysis, contact us:

  • Benefit from personalised support
  • Develop your own cultural identity
  • Make a difference

Our Leadership Team

A team of senior consultants, experts in data management, big data infrastructure and algorithms, to help you benefit from the digital revolution
Michael Butcher
Christophe Le Lannou
CEO & Founder
Mikaël Dautrey
Infrastructure Expert
Marion Henry
Administration – Document Management
Jean-Baptiste Caillau
Mathematical Expert
Anne Bioche
Talent Acquisition & Communication Manager


For the sake of quality and efficiency, we work with experts in each field

Questions about data:

Leave us a message

The art of data

Data conferences, articles, white papers
  • Is the Degree of Freedom a good predictor of Market Regime?


    This article introduces the definition of the Degree of Freedom and explains why we think it is a good indicator of market regime changes.
    We illustrate it with a study of three distinct market periods: the financial crisis of 2007–08 (an example of a market bottom), the 2010 European sovereign debt crisis, and the flash crash of May 2010. In each of these examples, the Degree of Freedom appears to be a good indicator of the change in market dynamics, as indicated by the VIX.
    Further work still needs to be done, but at a time when alternative indexes are very popular, the Degree of Freedom could be a useful tool to price options on indices without listed options markets. Get in touch if you are interested in discussing it further.
    By Xiyu Zhao & Christophe Le Lannou


    More information on our methodology and our calculations is available upon request.

  • Practical challenges in data-driven automation


    Some practical challenges in data-driven automation within financial services

    Through automation, software can take over the repetitive, manual work around data previously done by humans. The many monotonous, rule-based processes found in financial services can thus be handled far more efficiently. However, marketing teams and journalists mislead us when they claim that a mysterious ‘Artificial Intelligence’ will instantly take charge of those dull processes. These transformations are neither mysterious nor easy: they consist of building expert systems which combine business expertise, technological expertise and digital data. It therefore does not surprise me that projects which are not taken seriously fail to deliver the promised results. In this article, I share some of my experience of working in financial services, where data should flow seamlessly across the entire organisation.

    This week, I will focus on one of the main challenges: fully understanding a business process in all its details and decomposing it into a process flow. Recently, I worked on implementing an automated platform to value proprietary market indexes. Indexes like the CAC40 or the S&P500 are typically weighted averages of stock prices. Even though the algorithm to calculate a market index could not be simpler, automating the process was a challenge. Before automation, the indexes were computed in Excel: it took a team of three working ten hours a day to value several dozen indexes. The process was not only slow, manual and repetitive but also error-prone and not scalable. The client urgently needed a platform that could value hundreds of indexes daily, with results reliable and transparent enough to be sent to clients to value their financial contracts.
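    The basic calculation described above, an index valued as a weighted average of its constituents' prices, can be sketched as follows. This is a simplified illustration with hypothetical tickers and weights, not the client's actual platform:

```python
def index_value(prices, weights):
    """Value an index as the weighted average of its constituents' prices.

    prices and weights map ticker -> float; weights need not sum to 1,
    so we normalise by the total weight.
    """
    if set(prices) != set(weights):
        raise ValueError("price and weight universes differ")
    total_weight = sum(weights.values())
    return sum(weights[t] * prices[t] for t in weights) / total_weight

# Hypothetical three-stock index:
prices = {"AAA": 100.0, "BBB": 50.0, "CCC": 200.0}
weights = {"AAA": 0.5, "BBB": 0.3, "CCC": 0.2}
value = index_value(prices, weights)  # 0.5*100 + 0.3*50 + 0.2*200 = 105.0
```

    As the article goes on to explain, the real difficulty is not this formula but the special cases that invalidate its inputs.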

    This might initially seem easy: you have your weights and your stock prices, which you combine into a weighted average. But working with over 2,000 stock prices, spread across 20 markets and 10 time zones, meant we encountered a multitude of special cases every day: dividends, corporate actions (stock splits or mergers), stock suspensions, market holidays. As the saying goes, the devil is in the details. All these possibilities had to be handled systematically. Some cases required access to new data sources, such as market calendars; others required a mathematical algorithm. Suddenly, your process becomes increasingly complicated.

    Our main process flow took a week to design, but securing access to reliable extra data sources took several months. Updating the index weights was an even more complex task. As the implementation progressed, indexes were constantly added to the new platform, and the law of large numbers shifted many situations from rare to common, such as dividends declared in a different currency than the stock's price (European oil companies like to declare their dividends in USD). The frequency of these new events, their complexity and the cost of implementing them drove our decisions on whether to integrate them into the process flow or to raise an alarm for manual intervention.
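    One of the cases mentioned above, a dividend declared in a different currency than the stock's price, reduces to a spot conversion before the dividend enters the index calculation. A minimal sketch follows; the function name, FX-rate mapping and rates are illustrative assumptions, not the platform's API:

```python
def dividend_in_index_currency(amount, div_ccy, index_ccy, fx_rates):
    """Convert a dividend into the index currency.

    fx_rates maps (from_ccy, to_ccy) -> spot rate. A missing rate raises
    KeyError, mirroring the article's point that unhandled cases should
    raise an alarm for manual intervention rather than fail silently.
    """
    if div_ccy == index_ccy:
        return amount
    return amount * fx_rates[(div_ccy, index_ccy)]

# A European oil company declaring a 1.50 USD dividend for a EUR-based index:
fx_rates = {("USD", "EUR"): 0.90}  # hypothetical spot rate
eur_amount = dividend_in_index_currency(1.50, "USD", "EUR", fx_rates)
```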

    Finally, the human factor was fundamental to the success of this project. Since the index team did not enjoy the repetitive tasks, they welcomed the project and the new platform it offered: it relieved them of growing pressure from reliability requirements, audit, bureaucracy and a higher workload. From the beginning they were engaged in the project and openly shared their expertise and business knowledge. Today they deliver more indexes with higher quality, more transparency and less stress, and can focus on complex situations and questions from clients.

    Thanks to this successful automation, our client now complies with its benchmark regulation at minimal cost, has reduced the full-time resources required by three, provides a better service to its clients and can expand its activity further.

    Next, I will discuss the importance of data governance and risk management in your automation process.

    Christophe Le Lannou

  • Towards a professional code of ethics

    Mike Butcher

    For Society of Data Miners
    At the Alan Turing Institute
    London 20 March 2017

    Towards a professional code of ethics for data mining

    Good evening everyone. And thank you Tom for your kind words.

    I have been asked to address the general approach to drafting a code of best practice, drawing on the wide and diverse experience of the members of the Society of Data Miners and other stakeholders.

    I have no slides and will begin with some general remarks about the context in which a professional code of data mining is to be developed, and then suggest a process for producing one. This at a time when technological progress, for example in storing and processing data, is causing the future to rush towards us with incredible volume, velocity and variety – and with Artificial Intelligence being used as a decision-making device.

    Let me start by saying that I have experience of both major companies and start-ups and have run business ethics compliance programs – but I am new to the world of data mining and the explosion of economic activity around it. But so are we all. That said, I do not want to pretend to the technical knowledge that almost everyone in this audience is likely to have. In due course this evening we will have the opportunity to hear what you have to say – including ideas of your own. I really mean it when I say all critical comments are very welcome – especially when they are not accompanied by a personal insult.

    Given all the unknown unknowns coming our way we must, I think, start provisionally. And even if we produce a code – we should think of ourselves for some time as working on a draft that is going to change – and rapidly. An ethics Code will need to be updated regularly. Not because what is right and wrong changes but because circumstances in which right and wrong are calculated change. That said at any time the draft we have must be the best we are capable of putting together at that time.

    And we should proceed as if we are working under a voluntary regime that we devise for ourselves – that is self regulate within the laws and regulations under which we operate.

    Ethics are rules governing our behavior based on ideas about what is morally good and bad, right and wrong. For many these rules come from their religion, and even those with no religion use words and concepts forged in a religious context which still bear its imprint. And here in London, we have many cultural traditions and must not presume an ethics based on a Judeo-Christian religious perspective. We have a Muslim Mayor and many, many secularists.

    Pragmatically this may not matter, though we should be sensitive to it. In giving business ethics advice I have found it helpful to suggest people ask themselves “Would I lose the respect of a close family friend if she or he knew I was doing this or complicit in it?”

    And here I would like to say that most large companies take the view that even where the law is permissive directors and employees should take the high road and not do something they believe is morally wrong.

    So right or wrong in what context? Well we already have principles that have been around for a while in relation to data processing. Some of these are already law with wide acceptance in many parts of the globe.

    Purpose limitation, proportionality, adequacy and accuracy of the data, limited retention, security and confidentiality, transparency, rights for the data subject, and inhibition of machine-made decisions.
    And what is difficult is that these established principles may clash with what technological progress throws up – for example with Artificial Intelligence being used as a decision making device.

    Through the General Data Protection Regulation (GDPR) the EU intends to give citizens back control of their personal data and to simplify the regulatory environment for international business by unifying the regulation within the EU and by strengthening and unifying data protection for individuals within the European Union (EU) as well as addressing the export of personal data outside the EU.

    So in thinking about what is right and wrong we need to take account of the developing law under the GDPR and elsewhere and think of where there is wriggle room and ask ourselves, even if something is legal, whether we ought to do it anyway. Law and morality often overlap and it is sensible in a democracy under the rule of law to start off with the idea that morally you should obey the law. But there are many areas where morality takes over where the law falls silent. We do not seek to have everything governed by law, and there are large areas where the law is silent and society relies on morality and sometimes only good manners to inform people’s right-minded actions. And even if you have a legal right it may be immoral to enforce it.

    Thus there are important things for our code to consider other than legal matters – where morality is relevant in non-legal but extremely important commercial issues such as, Reputation, Brand, Integrity, Trust, Risk, investor expectations both for large corporations and SME’s.

    These are all wide areas and I would like to propose that the way to start is by developing a program which uses the talents and experience of our members and associates to prepare and produce the input and iterative review for a code as it is developed.

    The great Victorian codifiers of Company and Sale of Goods law did not sit down and off the tops of their heads invent the Companies Act and Sale of Goods Act. No, they looked at the myriad of individual cases and decisions over the years – which were mainly about doing justice between individual parties and protecting investors and purchasers from many different and inventive kinds of malfeasance.

    So a professional ethics Code should give guidance for companies and individuals on how to ensure what is bad and good is actively thought about as a company goes about its lawful business – and to reconcile the rule of law, morality and the development of data processing, which is inherently disruptive of existing norms. It may actually be immoral to resist change which brings great benefits to society. It is possible for example that driverless cars, using massive amounts of data and Artificial Intelligence, will save more lives than the system they replace, with its many deaths caused by human error.

    A Code has to be useful to businesses whilst meeting the reasonable concerns of a wide range of stakeholders – who need to be identified – and taking them with us. It is arguable that for a company not to have auditable internal processes and controls that ensure the rightness of its actions is considered, or systems whose architecture does not protect personal data, is itself immoral.

    The Code should encourage companies, their directors and employees to value and engage with its data mining professionals, inside the company and outside, in assuring its ethical conduct. The role of Chief Data Officer, or her or his homologue, should formally be recognized – not by giving the CDO the power to decide what is ethical – but by involving her or him both formally and informally in internal processes and compliance programs – helping elucidate the issues. Ethics in data is not the province only of the Chief Data Officer – it is everyone’s business.

    The code should assist the Company, its directors and employees in achieving good Corporate Governance, and meeting Legal and Regulatory requirements and reassuring everyone that what is going on is socially legitimate in its soft non-legal sense.

    Crucially the code should assist companies in their relations with the capital markets. For a stock to be listed on the London Stock Exchange it has to sign a Listing Agreement. This requires a company to accept and abide by codes of good governance. In time we should aim for adoption of the principles of our code to be part of what the capital markets see as good governance. This is particularly important for companies that envisage an IPO – and there may be a good number of you in this audience – and all the due diligence that the financial markets require.

    To summarise now the practical steps I suggest we take:

    Step 1 I think we should have a steering committee composed of individuals with varied backgrounds in data mining, from techies to marketers who in a structured way identify issues that can be put to members seeking examples of moral issues arising both from morality and from their experience of the law – especially issues generated by disruption of old ways of doing things and development of new creative solutions which do not easily sit with those old ways.

    Step 2 the steering committee uses a bottom-up approach, putting the issues it has identified to members, whether through workshops or social media, to uncover and prioritise the moral concerns raised by current practices and by areas of developing technology and law.

    For example, customer profiling through data mining holds new risks for individuals, whether by enabling re-identification of anonymized data or by machine-made decisions. It would be good to have practical examples of where this has happened, is happening or is going to happen, and views on the morality of what is going on – especially why something is right when the orthodoxy says it is wrong. Ambiguous and ambivalent situations are also very useful. Plenty of examples are an invaluable assistance in deriving a sensible principle for inclusion in a code.

    Step 3 A first draft of a relatively short and concise code is produced alongside an explanatory text. It should be mindful of codes already produced in other jurisdictions, e.g. the USA, and seek to be consistent with the corporate governance codes used by the capital markets, not least the London Stock Exchange. It should not seek completely to reinvent the wheel.

    Step 4 That draft is circulated to members for comment.

    Step 5 The second draft of the code is discussed with stakeholders, identified both by the steering committee and by members, and a third draft is produced reflecting stakeholder input and identifying contentious areas.

    Step 6 the third draft is circulated to members and through social media for comment.

    Step 7 the fourth draft is promulgated by the Steering Committee as the Professional Code of Ethics for Data Mining, along with an accompanying explanatory text – thereafter preparing regular updates to take account of developing understanding and new challenges.

    I hope this talk has been useful as a stalking horse to get the ball rolling. Thank you for your attention.

    Michael Butcher

  • Dealing with uncertainties

  • Data Visualisation Best Practices


    Presentation by Sophie Sparkes – Tableau Software – on 25th November 2015 for the Society of Data Mining
