Detecting Programming Languages in VS Code with ML

Machine Learning and VS Code

Since the August 2021 update of Visual Studio Code (version 1.60), the editor has made use of Machine Learning (ML) to detect the programming language of the code you are working on – this also includes code that is pasted into the editor.

Because of this, Visual Studio Code automatically (or rather, auto-magically) sets the language mode, applying the language’s specific colorization and offering extension recommendations. All of this is powered by TensorFlow.

In case you haven’t heard of Machine Learning (ML), let me quickly explain it so that you can understand the context here a bit better.

What is Machine Learning

According to Wikipedia: “Machine learning is the study of computer algorithms that can improve automatically through experience and using data. It is seen as a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as “training data”, to make predictions or decisions without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as in medicine, email filtering, speech recognition, and computer vision, where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.”

Now the question is: how do you train a model on data? One option is known as Deep Learning.
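Before we get to Deep Learning, here is what "learning from training data" means in its simplest form. The sketch below builds a 1-nearest-neighbour classifier from a handful of labelled samples; the feature values and labels are entirely made up for illustration:

```python
# A minimal sketch of learning from data: classify a new sample by
# finding the closest labelled example (all numbers are hypothetical).

training_data = [
    # (feature vector, label)
    ((10.0, 2.0), "python"),
    ((12.0, 9.0), "javascript"),
    ((3.0, 1.0), "python"),
    ((8.0, 7.0), "javascript"),
]

def predict(features):
    """Return the label of the nearest training sample (squared distance)."""
    def distance(sample):
        (x, y), _ = sample
        return (x - features[0]) ** 2 + (y - features[1]) ** 2
    _, label = min(training_data, key=distance)
    return label

print(predict((9.0, 8.0)))  # nearest sample is (8.0, 7.0) -> "javascript"
```

No rules about the classes were ever programmed in; the decision comes entirely from the sample data, which is the essence of the Wikipedia definition quoted above.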

What is Deep Learning

Deep Learning (or Hierarchical Learning or Deep Structured Learning) is a type of machine learning method that is based on learning data representations instead of task-specific algorithms. Deep Learning can be unsupervised, semi-supervised, or supervised.

Some Deep Learning architectures, such as deep neural networks, deep belief networks, and recurrent neural networks have been applied to the following fields:

  • Computer Vision
  • Speech Recognition
  • Natural Language Processing
  • Audio Recognition
  • Social Network Filtering
  • Machine Translation
  • Bioinformatics

Deep Neural Networks

Let’s see how Deep Neural Networks relate to the human brain, so that you can have a better idea of what must be done to train machines with data.

A Deep Neural Network (DNN) is an artificial neural network that has multiple hidden layers between the input and output layers. Deep Neural Network models are complex and seek to mimic the human nervous system, which consists of the following elements.


A neuron, also called a nerve cell, is the basic unit of the nervous system. Neurons are the units the brain uses to process information. Each neuron is made of a soma (cell body), an axon, and dendrites. Dendrites and axons are nerve fibers.


Neurons do not touch; instead, they are separated by tiny gaps called synapses. These gaps can be electrical or chemical, but ultimately they pass the signal from one neuron to the next. Synapses are either excitatory or inhibitory: signals arriving at an excitatory synapse cause the receiving neuron to fire, while signals arriving at an inhibitory synapse prevent the receiving neuron from firing.


An axon typically conducts electrical impulses away from the neuron’s cell body (soma). Axons, or nerve fibers, transmit information to other neurons, muscles, and glands. In pseudounipolar neurons (sensory neurons), such as those for warmth and touch, the electrical impulse travels along an axon from the periphery to the cell body, and from the cell body to the spinal cord along another branch of the same axon. Nerve fibers are classified into three types: A-delta fibers, B fibers, and C fibers.


Dendrites are the branched projections of a neuron that act to propagate the electrochemical stimulation received from other neural cells to the cell body, or soma, of the neuron from which the dendrites project. Electrical stimulation is transmitted onto dendrites by upstream neurons (usually their axons) via synapses which are located at various points throughout the dendritic tree.

Artificial Neural Network

An artificial neural network (ANN) is an interconnected group of nodes, like the vast network of neurons in a human brain. Neural networks consist of multiple layers, and a signal traverses them from the first (input) layer to the last (output) layer of neural units.

The purpose of a neural network is to solve problems the way a human brain would. Neural networks operate on real numbers, with the activation value of each unit typically lying between 0.0 and 1.0.


A Perceptron is an algorithm for supervised learning of binary classifiers: functions that decide whether an input, represented by a vector of numbers, belongs to some specific class.
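The perceptron decision rule and its learning procedure can be sketched as follows. The task (the AND truth table), the learning rate, and the epoch count are illustrative choices, not prescribed values:

```python
# A sketch of the classic perceptron learning rule, trained here on the
# AND function (a linearly separable binary classification task).

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: fire (1) if the weighted sum clears zero.
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            # Nudge the weights toward the correct answer.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)

def classify(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([classify(x1, x2) for (x1, x2), _ in and_samples])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron is guaranteed to converge on it; a single perceptron cannot learn a function like XOR, which is one motivation for the multi-layer networks discussed next.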


Layers are made up of several interconnected nodes, each containing an activation function (such as a sigmoid). Information is presented to the neural network via an input layer, which communicates with one or more hidden layers. The actual processing in the hidden layers is done through weighted connections. The hidden layers then link to an output layer.

Single-layer Neural Networks

A single-layer neural network is a network in which each output unit is independent of the others; each weight affects only one output.

Multi-layer Neural Networks

A multi-layer network is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs.

Forward Propagation

In a feedforward neural network, the information moves in only one direction – forward – from the input nodes, through the hidden nodes (if any), to the output nodes.
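Forward propagation through the layers described above can be sketched in a few lines. The network shape (2 inputs, 2 hidden nodes, 1 output) and all weight values are made-up illustrative numbers:

```python
# A sketch of forward propagation: each layer applies its weighted
# connections and a sigmoid activation, passing the result forward.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, weights, biases):
    """Propagate an input vector through successive weighted layers."""
    activations = inputs
    for layer_w, layer_b in zip(weights, biases):
        activations = [
            sigmoid(sum(w * a for w, a in zip(node_w, activations)) + b)
            for node_w, b in zip(layer_w, layer_b)
        ]
    return activations

# 2 inputs -> 2 hidden nodes -> 1 output node (illustrative weights)
weights = [
    [[0.5, -0.6], [0.1, 0.8]],  # hidden layer: one weight row per node
    [[1.0, -1.0]],              # output layer
]
biases = [[0.0, 0.0], [0.0]]

output = forward([1.0, 0.5], weights, biases)
print(output)  # a single activation between 0.0 and 1.0
```

Note that information only ever flows left to right here; nothing in this function looks back at an earlier layer, which is exactly what distinguishes feedforward networks.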

Backpropagation

Backpropagation is a training algorithm that feeds values forward through the network, calculates the error at the output, and propagates that error back to the earlier layers so their weights can be adjusted.

Reinforcement Learning vs. Supervised Learning

Reinforcement learning differs from supervised learning in that correct input/output pairs are never presented, and sub-optimal actions are not explicitly corrected. The focus is on online performance, which involves finding a balance between exploration of uncharted territory and exploitation of current knowledge.
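The exploration/exploitation balance can be sketched with an epsilon-greedy agent choosing between two slot machines. The payout probabilities, epsilon, and number of pulls are all made-up illustrative values:

```python
# A sketch of exploration vs. exploitation: the agent never sees labelled
# input/output pairs, only a reward signal after each action it takes.
import random

random.seed(0)
true_payout = [0.3, 0.7]   # hidden reward probability of each arm
estimates = [0.0, 0.0]     # the agent's current knowledge
counts = [0, 0]            # how often each arm has been pulled
epsilon = 0.1              # 10% of pulls explore at random

for _ in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(2)              # explore uncharted territory
    else:
        arm = estimates.index(max(estimates))  # exploit current knowledge
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    # Incremental average of observed rewards for this arm.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates, counts)  # estimates drift toward the true payout rates
```

Notice the contrast with the supervised examples earlier: no one ever tells the agent which arm was the correct choice, and the occasional random pull is what stops it from locking onto the worse arm forever.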

Hannes DuPreez
Ockert J. du Preez is a passionate coder and always willing to learn. He has written hundreds of developer articles over the years detailing his programming quests and adventures. His books include Visual Studio 2019 In-Depth (BPB Publications) and JavaScript for Gurus (BPB Publications). He was the technical editor for Professional C++, 5th Edition (Wiley), and a Microsoft Most Valuable Professional for .NET (2008–2017).
