
Site Search Results


  • What is representative sampling and how can it be used on two-dimensional areas?

    Representative sampling is a term analytical chemists use for a statistical basis for collecting samples from a large source in a way that evaluates the whole source fairly. This matters because, when sampling from large areas, it would otherwise be possible for the person carrying out the analysis to directly influence the results to further a hypothesis simply through where the samples are taken from. So how can it be used on two-dimensional areas? How do I know if a sample is representative? There is no black-and-white answer as to whether a sample is representative or not. It is more of a scale and you, as an analyst, would ideally want to be towards the representative end of this scale without getting to the point of silliness where you are taking thousands of unneeded samples just for the sake of it. What is an example of representative sampling? A field next to a river could be contaminated with extremely high levels of chemical run-off from a production facility located upstream from the field. To improve the accuracy of the analysis, multiple samples would have to be taken. However, the distance from the production facility, as well as the distance from the river, would inevitably affect the results of the soil analysis. What is the next step? The chemical production facility could hire a team to investigate the field from which the complaint was filed. They may find the levels of whatever the contaminant may be are well within the safe limits, and no further action would be taken. However, this method would not work if they took samples that match the hypothesis they desire using unethical practices, such as taking all the samples as far away from the river as possible and/or as far downstream as possible. How do I plan my representative sampling techniques? When planning your sampling techniques, you must always try to be as unbiased and fair as possible when investigating a claim. Using the field example above, a common method for sampling would be to split the field up into quadrants. You would then take a sample from each quadrant and form a map of how the concentration of the contaminant dissipates throughout the field. The smaller the quadrants, the more representative the sampling. Is there another sampling method I could use? Another method, instead of forming this “map”, would be random sampling. This is commonly used when the area is simply not feasible to analyse in a quadrant manner due to its immense size. Where the quadrants are formed, random numbers are selected along the ‘X’ and ‘Y’ axes and samples are taken from these points, as sketched below. It can be quite simple to be representative of a two-dimensional area. However, when it comes to analysing a three-dimensional volume of matter, things often become a little more complex.
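
    To make the quadrant and random-sampling idea a little more concrete, here is a minimal Python sketch (the field dimensions and grid size are invented for the example) that picks one random sampling point inside each quadrant, so every part of the area is represented without the analyst choosing the exact spots:

```python
import random

def stratified_sample_points(width_m, length_m, n_cols, n_rows, seed=None):
    """Pick one random sampling point inside each quadrant of a field.

    The field is split into an n_cols x n_rows grid; one (x, y) coordinate
    is drawn uniformly at random within each cell, so every part of the
    field contributes a sample.
    """
    rng = random.Random(seed)
    cell_w = width_m / n_cols
    cell_l = length_m / n_rows
    points = []
    for col in range(n_cols):
        for row in range(n_rows):
            x = col * cell_w + rng.uniform(0, cell_w)
            y = row * cell_l + rng.uniform(0, cell_l)
            points.append((round(x, 1), round(y, 1)))
    return points

# Example: a 100 m x 60 m field split into a 4 x 3 grid -> 12 sampling points
for point in stratified_sample_points(100, 60, 4, 3, seed=1):
    print(point)
```

    Shrinking the grid cells increases the number of points and, as described above, makes the sampling more representative.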

  • Identifying background radiation sources in flame photometry

    Sources of background interference are those that emit either light or electronic interference. These are usually cancelled out from the collected signal via calibration of a blank, or via the use of grounding to eliminate the stray electrons present in circuitry during voltage measurement. Types of interference that affect the light emitted from the sample include atmospheric effects, background radiation, entropy-based random emissions and stray light sources. What are atmospheric effects? Atmospheric effects have a number of impacts, both on the amount of solar radiation reaching the surface of the Earth and on the flame itself. They include the oxygen and CO2 content of the air, which affects the combustion of the fuel, as well as the ambient pressure, which affects the rate at which combustion occurs, and the ambient temperature. What is background radiation? Identifying background radiation is important because the background radiation that affects the detection of ion concentrations in a sample during photometry is usually secondary radiation from cosmic radiation. Cosmic radiation bombards the Earth's atmosphere with positively-charged ions ranging from protons up to iron (by atomic weight). This then converts to secondary radiation, including electrons alongside a smattering of other particles and waves, which exist all over the Earth’s surface. These stray electrons can be picked up by the voltmeter that measures the voltage produced by the photodiode array. This is cancelled out with a blank calibration. What is grounding? Grounding is a technique that electrical engineers utilise to form a common return path for electric current, or to connect the circuit directly to the Earth. This limits sources which could be picked up by the voltmeter and confused with the actual current detected from the photodiode array, produced by the photon packets emitted from the excitation of the analyte. What is stray light? Stray light is another source of background radiation, emitted from the sun in the form of photons. In the sun, elements range from hydrogen up to iron. Therefore, included in the sun's chemical makeup are quantities of sodium, potassium, calcium, barium and lithium. All of these are superheated and constantly radiate photons from excitation due to the sun's extreme temperatures. This means the light that surrounds us partially shares the wavelengths which the photodiode arrays detect, and is another source of interference that the blank standard cancels out via calibration. What are entropy-based random emissions? Entropy-based random emissions are a type of emission due to the random nature of particle-to-particle interactions. If a puddle of water can slowly evaporate through this type of interaction - where water molecules of random energy levels interact and some break from the liquid form and turn to the gaseous form - then it is plausible that the excitation of atoms or ions could also occur due to random particle collisions. This type of emission is totally random. However, it is very minute and should not affect the detected signal very much.

  • What is lattice energy and how does it influence flame photometry?

    Lattice energy is the energy needed to separate a mole of an ionic solid into gaseous ions. Its internationally recognised definition is “a measure of the energy contained in the crystal lattice of a compound, equal to the energy that would be released if the component ions were brought together from infinity”. So, the energy released when the assortment of positive and negative ions combines into a solid is, in sum total, the lattice energy. How is lattice energy formed? Lattice energy is held by a crystal structure. However, it is not always obvious why energy is released when these ions combine. This is partly a question of entropy - a quantitative measure of what the second law of thermodynamics describes: the spreading of energy until it is evenly spread. A crystal structure is a very rigid and ordered structure so, to form it, energy must come from elsewhere; i.e. the electrostatic pull between a positive and a negative ion. As the kinetic energy of the pulling motion is changed into a “holding” energy between ions, the crystal lattice is formed. What is an example of lattice energy? One of the most common examples of lattice energy is found in sodium chloride (NaCl). The lattice energy here is the energy released when gaseous sodium (Na+) and chloride (Cl–) ions come together to form a lattice of alternating ions in the NaCl crystal. The negative sign of the energy is indicative of an exothermic process. What influence does lattice energy have on flame photometry? Some types of interference commonplace in flame photometry are due to the formation of compounds which the temperature of the flame is unable to break down. From our knowledge, these ionic compounds are ligand-based, meaning they have strong ionic bonding from multiple negatively charged ‘ligands’ onto the central metal ion. This in turn pulls on the electronic shells of the metal, affecting the light emissions and the spectra produced from the ion when heated in a flame. The strong binding force of some of these ligands is related to the lattice energy of a crystalline structure; it could be considered the equivalent of the pull between two magnets. The value of this lattice energy is usually quoted as a negative figure, meaning that energy needs to be put into the system to break this bonding. Ionic species form crystal structures, and a valid example of this type of interference in photometry is the presence of sulphates when analysing calcium samples, where the signal produced is significantly depressed by the presence of sulphates due to their strong lattice energies.
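
    As a worked illustration of the sodium chloride example above (using the commonly quoted literature value of roughly 787 kJ/mol, which is not taken from this article):

```latex
\mathrm{Na^{+}(g)} + \mathrm{Cl^{-}(g)} \longrightarrow \mathrm{NaCl(s)},
\qquad \Delta H_{\text{lattice}} \approx -787\ \text{kJ mol}^{-1}
```

    The negative sign shows that energy is released when the lattice forms; the same magnitude of energy must be supplied - by the flame, for example - to pull the lattice apart again.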

  • Can I use a phase diagram to determine states of matter?

    When it comes to physics, a state of matter is one of the distinct forms in which matter exists. It’s understandable that you automatically think of the ‘big three’, which are solids, liquids and gases. That may also extend to plasmas, or maybe even Bose-Einstein condensates. The latter is a state of matter which is usually formed when a gas of bosons (particles) at low densities is cooled to temperatures close to absolute zero. Phases like these are better suited to describing what state of matter is present in a system than any single definition. Identifying the phases of matter Imagine a single ice cube in a closed system, where no heat, light or energy is escaping. Yes, the ice is solid and, yes, there is gaseous water vapour and liquid water present in the system. Applying phases to this problem, though, gives scientists a statistical look at the overall system with respect to its thermal energy and pressure. In this system of ice, all three states of matter would be present. However, the majority would be in the solid state, so the overall system would be described as being in the solid phase. If the temperature of this system were increased significantly, this could be totally different. Using a phase diagram to determine states of matter A phase diagram is a chart used to show the conditions at which thermodynamically distinct phases occur and coexist at equilibrium. These diagrams are useful tools for scientists to investigate what state the majority of matter in a system is in at a given temperature and pressure. There are a few important points on these graphs which are of note. The critical point is the temperature and pressure beyond which distinct liquid and gas phases no longer exist. Meanwhile, the triple point is where all three standard states of matter exist at the same time. Such points are used in a multitude of equations relating to states of matter and pressure-temperature calculations in thermodynamics. The phase diagram for water is a common example. Can phase diagrams be used for other states of matter? Phase diagrams can be extended to also include where plasma formation occurs, as well as where other states of matter occur. However, these regions would be far further along the temperature and pressure axes. This is due to the immense pressures and temperatures required to form plasmas and other exotic states of matter. Just think how much energy and pressure is present in the sun, where these states occur.
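
    For orientation, the two key points on the water phase diagram described above sit at approximately the following temperatures and pressures (standard literature values, quoted here rather than taken from the article):

```latex
\text{Triple point of water:}\quad T \approx 273.16\ \text{K},\qquad P \approx 611.7\ \text{Pa}
\\
\text{Critical point of water:}\quad T \approx 647.1\ \text{K},\qquad P \approx 22.06\ \text{MPa}
```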

  • The scientific method – from conjecture and hypothesis to predictions and analysis

    Having the ability to question what is all around us is vital to our progression as the alpha species on Planet Earth. But how did we go from asking questions to logically rationalising our natural environment through conjecture, hypothesis and predictions, and thus to the definition of the scientific method? What is conjecture? Conjecture is defined as deriving a predicted outcome from our current understanding and knowledge. From a conjecture, we can then proceed to a hypothesis. What is a hypothesis? A hypothesis is defined as a possible explanation for a conjecture. When planning an experiment, it is very easy to skip over actually defining its conjecture and hypothesis. The experimental process then seeks to either prove or disprove your hypothesis. It’s easy to get hung up on making sure your hypothesis is always correct. In reality, it does not matter whether your hypothesis is right or wrong. What is a prediction? The next step in the method, after defining your hypothesis, is a prediction. This is the logical consequence of your hypothesis. If ‘X’ does do ‘Y’, then property ‘Z’ would be indicative of ‘Y’ and would therefore support ‘X’. This is a common form for a prediction, such as in the analysis of the shape of DNA. What is the DNA method? Assigning a prediction to the work Francis Crick did on the structure of DNA: “If DNA had a helical structure, its X-ray diffraction pattern would be X-shaped.” After a prediction is made, you can move on to the main course of the scientific method: testing. To go back to the DNA method, the testing of that prediction would be the actual crystallisation of pure DNA structures, then diffracting X-rays through them and analysing the diffraction pattern. Analysing the testing Through the data collected during testing, a final conclusion is formed based on the facts that are determined. This step is the marriage between hypotheticals and solid evidence. Analysis can sometimes be wrong, but this allows for better understanding and knowledge for future testing.

  • What is spectral interference, when does it occur and how can it be minimised?

    Spectral interference, or spectral overlap, is a term used by scientists who are interested in looking at the emission wavelengths of elements to classify data from a source of excited ions which contains a mixture of elements. It occurs when an emission line of an element that is present in the sample, but is not the one being determined, falls within the measuring line of the analyte of interest. The emission of that element is then measured together with that of the analyte of interest, increasing the detected signal. This bloating of the detected signal tells the instrument that more of the analyte of interest is present in the sample than there actually is. When can spectral interference occur? An element emits a unique spectrum of light when excited by an energy source. This spectrum of light can be subcategorised into strong emission lines (meaning that these wavelengths are emitted frequently) and weak emission lines (wavelengths that are emitted infrequently). When we look at the emission spectrum of calcium in comparison to potassium, or even sodium, we can see that the spectra are unique overall. However, if you were to list every individual wavelength, you would notice overlap between a few emission lines. This is known as spectral interference. It can be classified as a type of matrix interference, as it arises from a mixture of different elemental species being present in the solution. However, it is not the only type of matrix interference that is possible. See phosphate and sulphate interferences of calcium for an example of non-radiative matrix interferences. How can the spectral overlap effect be stopped or minimised? A physical solution to this problem is by way of sample and calibration standard preparation. This involves introducing the element that is causing the signal increase into all of your standards and your blank, so that its contribution is subtracted from the sample’s signal via the calibration curve. Alternatively, the BWB-Tech range of flame photometers includes functionality that allows this spectral overlap to be calibrated out of the curve, reducing or completely removing its impact on the sample analysis.
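
    As a rough sketch of the correction idea described above (a generic illustration only, not the algorithm used inside the BWB instruments; the element names, numbers and helper function are invented for the example), the interferent's contribution can be estimated from its own calibration and subtracted from the analyte channel:

```python
def correct_for_overlap(signal_analyte, conc_interferent, overlap_slope):
    """Subtract the contribution of a spectrally overlapping element.

    signal_analyte   : raw signal measured on the analyte's channel
    conc_interferent : concentration of the interfering element, measured
                       on its own (interference-free) channel
    overlap_slope    : signal produced on the analyte channel per unit
                       concentration of the interferent, found by running
                       pure interferent standards
    """
    return signal_analyte - overlap_slope * conc_interferent

# Hypothetical example: the calcium channel reads 1.25 units, the sample
# contains 40 mg/L sodium, and pure sodium standards show 0.004 units per
# mg/L of sodium appearing on the calcium channel.
corrected = correct_for_overlap(1.25, 40.0, 0.004)
print(f"Corrected calcium signal: {corrected:.2f} units")  # 1.09 units
```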

  • What is the difference between a covalent bond and a dative bond?

    Today we discuss the difference between the two bonds. A covalent bond is a type of bonding which occurs between two atomic nuclei. This is where the electron structure of the atoms becomes shared between the two nuclei in either a single, double or triple bond to give an overall more stable structure in the current environment the atoms exist within. A single covalent bond is where the spherical s orbitals of a pair of atoms combine to form what is known as a sigma bond. To form a double bond, the p orbitals must be utilised. What are P orbitals? P orbitals are dumbbell-shaped orbitals oriented at 90-degree angles to one another. Because the shortest distance between the two atoms is already occupied by a sigma bond, the two atoms’ p orbitals align outside this axis, forming a second bond. A triple bond is where a second set of p orbitals, at a 90-degree rotation to the double bond, forms. Where does covalent bonding occur? Usually in covalent bonding, one electron for the bond is supplied from each of the atoms to give a total of two electrons. In some cases, both electrons can be supplied from a lone pair of electrons on a single atom, which is drawn towards a positively-charged nucleus. In aqueous solutions, however, it is commonplace for ions to exist which, when positively charged, become very dense with charge (size vs. charge ratio). This high charge density allows what are known as lone pairs of electrons to become attracted to the positive charge of the ion. This is similar to a pair of magnets, due to the electrons having a negative charge and the ion having a positive charge. What is a dative covalent bond? When a collision occurs between the two charges, if the spatial alignment is correct and there is sufficient energy in the collision, the two electrons in the lone pair can form a type of covalent bond with the ion’s nucleus, where both electrons were supplied from a single molecule. This type of bond is what is known as a dative covalent bond. These dative covalent bonds can have some interesting effects upon the overall electronic structure of a metal ion, such as the ones measured in photometry. This can affect the way in which it undergoes excitation and, therefore, how it interacts with a flame photometer.

  • What is the visible light spectrum and can the human eye see it?

    The visible light spectrum is a portion of the electromagnetic spectrum picked up by the human eye. This spectrum ranges between 380 and 740 nanometres by wavelength, which corresponds to roughly 405 to 790 terahertz in frequency. The visible light spectrum makes up 100% of all light humans are able to observe. How is the visible light spectrum detected? The visible light spectrum is detected via three types of cell: cone cells, rod cells and intrinsically photosensitive retinal ganglion cells. These three types of cells are classified as photoreceptor cells due to the presence of proteins which absorb photons of specific wavelengths. This triggers a change in the cells’ overall electrical charge, otherwise known as membrane potential. The change in potential is then carried to the brain for processing into an image. What are rod cells? While highly sensitive to photons, rod cells do not differentiate greatly between wavelengths of light and colour. They are primarily used as a night-vision aid and, as the name would imply, they are shaped like rods. What are cone cells? Cone cells transmit a different membrane potential depending on the wavelength of the photons they have absorbed. They function optimally in high-intensity light, such as during the day. Cone cells are less sensitive to photons than rod cells, which is why the pupil dilates during low-light situations to allow additional photons to reach the retina for detection. These cells are very small and densely packed to maximise the number of cells over the relatively small area of the eye. What are intrinsically photosensitive retinal ganglion cells? Intrinsically photosensitive retinal ganglion cells are a type of cell found deeper in the eye’s tissues than the cones and rods, which are placed on the inner surface. While these cells can detect light from the visible light spectrum, the processing of their information is not based upon image formation like the rod and cone cells. Instead, they mainly help to regulate the human circadian rhythm, feeding information to the part of the brain called the hypothalamus. These cells contribute to the light-controlled release of melatonin, a hormone which aids sleeping, as well as sending information to the brain which controls our circadian rhythm. While this range of electromagnetic radiation is what we humans can see, it is not the same for all organisms with eyes. Bees, for example, can see UV light, which is outside the spectrum available for human eyes to detect. These differences are mainly due to the evolutionary requirements of a particular species.
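
    The wavelength and frequency figures above are linked by the simple relation f = c / λ. A tiny Python sketch of the conversion:

```python
C = 299_792_458  # speed of light in a vacuum, m/s

def wavelength_nm_to_frequency_thz(wavelength_nm):
    """Convert a wavelength in nanometres to a frequency in terahertz (f = c / lambda)."""
    return C / (wavelength_nm * 1e-9) / 1e12

for nm in (380, 740):
    print(f"{nm} nm  ->  {wavelength_nm_to_frequency_thz(nm):.0f} THz")
# 380 nm -> ~789 THz, 740 nm -> ~405 THz
```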

  • What is a laser and how does it work?

    A laser is a device which stimulates molecules or atoms to emit light at particular wavelengths. It will then amplify that light and typically produce a very narrow beam of radiation. What many people perhaps don’t know is that the word ‘laser’ is actually an acronym for ‘Light Amplification by Stimulated Emission of Radiation’. What do we use lasers for? Lasers are used in a number of items we use every day. They include everything from optical disc drives and barcode scanners, to printers and entertainment systems. Lasers are widely used for material processing in manufacturing, such as for drilling, cutting and welding. Other uses include DNA sequencing instruments, fibre optics, laser surgery and skin treatments. Where did lasers come from? The first working laser was built in 1960 by Theodore H. Maiman at Hughes Research Laboratories. His invention was based on theoretical work by Charles Hard Townes and Arthur Leonard Schawlow, and used a chromium-doped ruby which produced ruby-coloured laser light at 694.3 nm. How did this early laser work? A rod of this chromium-doped ruby was encapsulated within a set of high-energy flash tubes, backed by mirrors. When these flash tubes are lit by an electrical current, the light emitted is bounced repeatedly between the mirrors and through the ruby rod. This high-energy light then causes the ruby rod to fluoresce in a process named stimulated emission. If you think back to our explanation of how ions are excited in a flame photometer, in a laser these already-excited ions are hit with light of a specific wavelength, which then causes the emission of light and a drop back into the ground state. How do mirrors work in a laser? The presence of mirrors in the system means that until the port used to ‘beam’ the light out of the ruby rod is opened, a greater and greater quantity of light is stored, bouncing between the mirrors and through the core. This stimulates more and more chromium ions to emit light and also keeps these ions excited for further lasing, maintaining what is known as population inversion. What is population inversion? Population inversion is another important piece of the puzzle of lasers. It is a concept from statistical mechanics which refers to the ratio of excited to unexcited ions in a system. Our system in this case would be the ruby rod, and our ions the chromium found within it. In a ruby, population inversion is the defining factor in the stimulated emission of light. As the laser light comes specifically from the already-excited ions in the ruby, having a higher concentration of excited ions than unexcited ions means that more laser light is emitted from the ruby rod.

  • What outside influences can affect a flame photometer?

    When using a flame photometer, flame stability can be affected by a number of different factors and outside influences. These include atmospheric conditions such as humidity, oxide concentrations, pressure changes and the oxygen content of the air. But what about the stability and reproducibility of results? Here, we will focus on how the instrument itself can affect the reproducibility of results and the stability of the flame. The samples and standards themselves can affect reproducibility - something that seems quite simple, but which is actually the product of a huge variety of different factors coming together. One of the most important is the viscosity of the solution, or the ease with which the sample flows. What causes differences in viscosity between solutions? One factor which can cause differences in viscosity between solutions is ambient temperature. More energy in the liquid increases its fluidity, and therefore the rate at which all of the mixing kinetics occur. This is due to the energy of the system being higher, making it easier to break the intermolecular hydrogen bonds in the solution. What other factors can have an effect? There is also pH which, if there are any proteins in the solution, can cause huge variations in the viscosity of a sample. This is due to acidic or alkaline conditions causing the overall structure of the protein to contract or expand. Another factor, similar in mechanism to the way pH and proteins interact, is the total dissolved solids (TDS) of the solution. TDS is used to describe the inorganic salts and small amounts of organic matter present in solution in water. Naturally, a high-TDS solution will be more viscous than a low-TDS one. In relation to the TDS of a solution, let's discuss the mixing chamber. It all comes down to ensuring that as close to a fully homogeneous gas as possible is passed through the burner head for combustion. If the gas is “chunky” and not mixed properly, the flame will burn unevenly and therefore give unstable results. How does this relate back to the TDS of the solution? In the mixing chamber, when high concentrations of dissolved solids are introduced for mixing, droplets can form on the chamber surface. These can then grow and grow, until they drop off into the chamber's vortex and are carried into the flame. Such a droplet can be millions or billions of times the size of the droplets normally emitted from the nebuliser and heated by the flame, and would of course produce a huge spike in the emission from the sample. Essentially, the stability of the flame is more of an artform than a science and is a balancing act between many different variables.

  • Selecting specific ions for analysis in flame photometry

    At BWB Tech, we use flame photometry to measure the intensity of the characteristic light emitted by excited metal ions in a flame. We use it to analyse Sodium, Potassium, Calcium, Barium and Lithium. All of these belong to the alkali or alkaline earth metal groups of the periodic table. It is also possible to analyse other elements in these two groups. Due to the rarity of these elements in the standard applications our customers purchase instruments for, though, those options end up being unused. Our team can, however, help in selecting and customising a photometer for your own personal needs. What other ions can be analysed by flame photometry? There are a number of other ions which could be useful to analyse with flame photometers. They include Lead, Arsenic and other toxic metals which can be found in chemical run-offs. The location of a leak can be found by tracking the concentration as it increases up a water source. However, it is just not feasible to do so with a flame photometer, and here's why. What makes an element suitable to analyse with a photometer? When using a flame photometer, increasing the temperature of the flame increases the amount of light emitted. It does this by increasing the ratio of excited atoms to unexcited atoms. Elements in the alkali and alkaline earth metal groups have only one or two electrons in the outer shell. Therefore, they require far less energy to excite than the other elements of the periodic table. As you move from left to right across the periodic table, the amount of energy needed to excite an atom increases massively. This increase in the energy required reduces the ratio of excited to ground-state ions. It also adds further jumps, which emit additional wavelengths, thus diluting the intensity of the single wavelength that the photometer's photodiode analyses to determine the concentration. When is an element not suitable to analyse with a photometer? If your application requires the analysis of multiple ions (10-plus elements) or elements found towards the right of the periodic table in the metalloid or transition metal groups, you should consider ICP (Inductively Coupled Plasma) analysis. Because the plasma forms at such a high temperature, electrons can be stripped from the ion, so the energy required to excite and analyse these elements is available. A flame photometer remains the quickest and easiest method to analyse five elements in fields such as medicine, biology and chemistry.
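
    The ratio of excited to ground-state atoms mentioned above can be estimated with the Boltzmann distribution - standard physics rather than something stated in the article. A short Python sketch, using approximate, illustrative values for sodium's 589 nm line and typical flame temperatures:

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV per kelvin

def excited_fraction(delta_e_ev, temperature_k, g_ratio=1.0):
    """Approximate ratio of excited to ground-state atoms (Boltzmann distribution).

    delta_e_ev    : energy of the excited state above the ground state, in eV
    temperature_k : flame temperature in kelvin
    g_ratio       : ratio of statistical weights of the two states (taken as 1 here)
    """
    return g_ratio * math.exp(-delta_e_ev / (K_B_EV * temperature_k))

# Sodium's 589 nm line corresponds to roughly 2.1 eV; a typical flame is of
# the order of 2,000 K, so only a tiny fraction of atoms are excited.
print(f"{excited_fraction(2.1, 2000):.1e}")  # ~5e-6
print(f"{excited_fraction(2.1, 2500):.1e}")  # hotter flame -> larger excited fraction
```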

  • What is the Venturi effect and how does it work in a nebuliser?

    A Venturi, or Venturi tube, is a system used to speed up the flow of fluid by constricting it in a cone-shaped tube. As it is constricted, the fluid must increase its velocity, which reduces its pressure and produces a partial vacuum. As the fluid leaves the constriction, its pressure then increases back to pipe level. What is the Venturi effect? The Venturi effect is named after its discoverer, the Italian physicist Giovanni Battista Venturi. The main operating principle of the Venturi effect is that the velocity of an incompressible liquid - in this case either your sample or standard - increases as the diameter of the tube through which the fluid passes decreases. In layman's terms, as the diameter of the tube decreases, the same amount of mass and energy is contained within a smaller space and must therefore be expelled from the system with a greater velocity. This relates back to one of the most basic principles in science, Newton’s third law of motion: every action must have an equal and opposite reaction. In terms of fluid dynamics, the continuity equation can also be applied to the Venturi principle. Its official definition is that “the rate at which mass enters a system is equal to the rate at which mass leaves the system plus the accumulation of mass within the system”. What are the limiting factors in the Venturi principle? There is one main limit to the Venturi principle. Where fluid velocity approaches the speed of sound, the flow rate through the nebuliser hits a critical point where any further pressure decrease downstream will not result in a net flow rate increase. This can cause the Venturi effect to stutter or even cease entirely. How does a Venturi work in a nebuliser? With a nebuliser, an aerosol is generated by passing an air flow through a Venturi in the nebuliser body. This forms a low-pressure zone which pulls droplets up through a feed tube from a solution or suspension of sample into the nebuliser body. In turn, this creates a stream of atomised droplets which flow into the mixing chamber. Higher air flows lead to a decrease in particle size and an increase in output. The nebulisers we manufacture for all our BWB-Tech flame photometers utilise the Venturi effect to form a fine and stable mist that does not change over time. To achieve this, the air compressor must deliver a very constant pressure, and the needle and orifice must be cut with precision to ensure regularity in their circular shape. A small deviation from this will cause fluid to impact the irregularity and build up into a small droplet that is then expelled from the orifice. This can then be pushed into the burner head and cause small spikes in the light emitted, as a larger mass of ions has been excited in the flame. The continuous development of our nebuliser over several years gives you the most stable readings possible from your instrument.
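
    For an incompressible fluid, the continuity equation quoted above reduces to A1·v1 = A2·v2, so the velocity scales with the inverse square of the diameter ratio. A minimal Python sketch, with made-up inlet and throat dimensions:

```python
def throat_velocity(flow_velocity_m_s, pipe_diameter_mm, throat_diameter_mm):
    """Velocity in the constriction from the continuity equation A1*v1 = A2*v2.

    For an incompressible fluid the volumetric flow rate is constant, so the
    velocity increases with the inverse square of the diameter ratio.
    """
    area_ratio = (pipe_diameter_mm / throat_diameter_mm) ** 2
    return flow_velocity_m_s * area_ratio

# Hypothetical figures: fluid entering at 5 m/s through a 4 mm inlet that
# necks down to a 1 mm throat speeds up by a factor of 16.
print(throat_velocity(5.0, 4.0, 1.0))  # 80.0 m/s
```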

  • What factors can affect your results when using flame photometry?

    When carrying out any form of analytical chemistry, it is common for someone to achieve one set of results, only to repeat the process and get a totally different outcome. Similarly, if two laboratories are studying the same thing, each may run the same test on the same sample but find massive fluctuations in their results. There are a number of different reasons that can lead to this outcome. What factors may influence the intensity of ion emission in flame photometry? When using a flame photometer, the most obvious point of contention is the stability of the flame. When you light a candle, you’ll notice how easily someone on the other side of the room can make the flame flicker. A gas-burning flame photometer burns more stably than a solid paraffin wax/wick candle. This is due to the pressure of the gas through the burner head. What causes low flame stability in flame photometry? Any tiny changes in the pressure of the flame's surroundings can cause very low flame stability, and the resulting flicker in the light emitted from the flame to the photodiodes can cause huge offsets in the output of the diode. General atmospheric pressure plays a similar part, meaning variation in results can occur. For example, a laboratory at sea level would find a flame burns brighter than one burning on top of a mountain. This is due to the fact that the oxygen content of the natural air is higher at sea level. This results in a faster and more complete combustion of the fuel, and therefore more light being emitted from the sample to be collected by the photodiodes. What might affect the temperature of a flame in flame photometry? The amount of oxygen present will always affect the temperature of a flame. However, other factors, such as oxides being present in the atmosphere, would also result in a lower combustion ratio. Oxides such as carbon dioxide, nitrogen dioxide and sulphur dioxide would all, to some degree, affect the burning of the flame and, thus, its stability. This is due to the way that all reactions work. Even during a combustion reaction, the ratio of products to reactants in the equilibrium affects the rate at which the gas burns. Other factors in your lab which can affect the results of testing are humidity and pollution levels, especially as these can both fluctuate. How can I control my results when using flame photometry? The factors mentioned above can be controlled via the calibration curve and your standards, as the results would then be calibrated for your environment. This is why it is so vital that you do not let a single calibration curve remain on an instrument over a few days or multiple analyses. The BWB flame photometers also utilise sealed chimneys, ensuring that only filtered air is passed to the flame, and with the novel built-in compressor and gas regulation system, fine control of the flame is far more achievable. Regardless of how clever the instrument is at controlling the flame conditions, though, it is still important to calibrate your instrumentation regularly throughout the day, even if you are testing the same sample type. Remember, sample analysis can only be as precise as your last calibration.
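
    As a minimal sketch of the calibration idea above (the standard concentrations and readings are invented for illustration, and this is a generic straight-line fit rather than the instrument's own calibration routine), the calibration curve converts a raw signal into a concentration under the conditions it was measured in:

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = m*x + c for small lists of points."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    return m, my - m * mx

standards_ppm = [0.0, 2.0, 5.0, 10.0]   # known concentrations of the standards
readings = [0.02, 0.41, 1.01, 2.03]     # signals measured on those standards today

slope, intercept = fit_line(standards_ppm, readings)

sample_reading = 1.30
concentration = (sample_reading - intercept) / slope
print(f"Sample concentration ~ {concentration:.2f} ppm")
```

    Because the slope and intercept reflect the flame and atmospheric conditions at the time the standards were run, the fit is only trustworthy while those conditions hold - which is why regular recalibration matters.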

  • How has Albert Einstein’s work influenced modern-day scientific methodology?

    In the last 100 years, technological prowess has undoubtedly been the human race’s greatest achievement. Albert Einstein famously published the theory of relativity in the early 20th century and, since then, modern scientific theory has been a driving force behind much of that progress. Einstein’s theory of special relativity came to the fore in 1905, building on theoretical results and findings by the likes of Albert A. Michelson, Hendrik Lorentz, and a number of other physicists. Then, between 1907 and 1915, Einstein developed general relativity, with contributions subsequently made by many others; the final form of general relativity was published the following year. But how have these methodologies changed over the years, how many are used today by modern scientists, and are people still as influenced as they once were by them? Building on this momentum, in 1926 Ronald Fisher, a British statistician, geneticist and academic, popularised his work on randomised design. This involved bringing in the application of independent variables alongside dependent and control variables. Fisher’s work on the actual physical design of an experimental process was revolutionary. It also led towards the massive data collection upon which the modern-day scientific principle is based. What were the main concerns with the experimental method? Ronald Fisher’s main concerns with his experimental method were validity, reliability and replicability. The introduction of proofs, and of asking how valid an experimental process really is, remains vital in a world now filled with rampant pseudoscience. In 1937 the first placebo experiment was published. This work began to bring to light the effect of us as humans and our influence upon the scientific method. It also pushed thinking towards attempts to reduce bias in the outcome of scientific work. How did computers revolutionise scientific methodology? At the end of the Second World War, economies across the globe were experiencing a post-war boom, a period which also became known as the Golden Age of Capitalism. The first computer simulations were created, laying the groundwork for something revolutionary: the marriage between science and technology. The ability to simulate natural environments while controlling different variables gave scientists a huge opportunity to investigate and collect data on processes that are immensely complex, with their many outcomes and random possibilities transferred into an easily workable computer programme. Looking ahead to the future of science In 2009, a “robot scientist” was developed with machine learning algorithms. It was able to perform independent experiments, test hypotheses and interpret findings. With the progression of AI (artificial intelligence), scientific findings could one day be made fully independently of human input. It’s tough to predict what the future holds, but we’re looking forward to finding out!

  • What is sodium, where is it found and why is it important?

    Sodium is a chemical element and one of the most abundant metals on Earth. It is a soft metal which can be easily cut with a knife. When freshly cut, it has a dull, greyish metallic colour which quickly oxidises to a white/grey colour due to the oxygen in the atmosphere. But why is the element important? Where is sodium in the periodic table? Sodium is an alkali metal, located in group one of the periodic table. The chemical symbol for sodium is Na, which is abbreviated from the Latin word natrium. This is in reference to the Egyptian natural mineral salt natron, which mainly consists of hydrated sodium carbonate (Na2CO3·10H2O). The abbreviation was first published in 1814 by Jöns Jakob Berzelius in one of the early systems of chemical symbols. The name sodium itself has been linked to the Arabic word suda, meaning headache, because sodium carbonate was used in Arabic culture as a headache remedy. It also has roots in medieval Europe, from the old headache remedy, sodanum. What is sodium used for? Sodium plays a vital and huge part in the world in which we live. It is normally one of the first chemical names you learn in school, as generic table salt, NaCl. Sodium is commonly used in the production of titanium, sodamide, sodium cyanide, sodium peroxide and sodium hydride. It is usually found in its ionic form as Na+ because elemental sodium is highly reactive with water. Even if there were a natural process for the formation of elemental sodium, any moisture that came into contact with it would quickly erode it or cause it to react. What does sodium do to your body? In the human body, sodium is an electrolyte needed in reasonably high levels. It is controlled by the kidneys, which regulate the salt levels in your body through the urine. Sodium levels themselves control the volume of blood in the body, which is a major factor in a person’s blood pressure. Can you detect sodium using flame photometry? In short, yes. It’s ideal for flame photometric analysis. Sodium has full 2s and 2p orbitals after ionisation, losing the single 3s electron from its outer shell to form the Na+ ion. The element also cannot exist in its metallic form when in contact with water. This ensures all of the sodium present in the solution is in the ideal form to determine its concentration via photometry. BWB offers several different models, all of which include simultaneous sodium detection along with a variety of other periodic table group 1 and 2 elements.

  • What is an aerosol and how is it used in flame photometry?

    An aerosol is a suspension of fine solid particles or liquid droplets in air or another gas. Aerosols occur naturally and can also be produced via different mechanical methods. Here, we break down the properties of an aerosol and how these properties make it ‘stable’. What is the most important property of aerosols in flame photometry? Particle size distribution is the most important property of aerosols in flame photometry. Ideally, the particles would be as small as possible and the diameter variance between different particles would be zero, meaning they are all the same size. In the real world, however, when the sample is pulled through a nebuliser (a device for turning a solution into a fine spray) and turned into an aerosol, the size of the particles released into the aerosol will vary. The particles of water in the aerosol are the medium that brings the ions dissolved in the liquid sample to the burner head for detection. Therefore, the surface tension of the liquid medium is of vital importance to the size of the aerosol particles. What happens when surface tension increases? The particles of water that break off from the liquid solution will be larger, as the binding force holding each droplet together is much greater. Breaking them apart requires more energy, which is supplied in the form of kinetic energy from the pressure forcing the liquid through the orifice of the nebuliser. Particle size distributions, when recorded graphically (‘X’ axis being size and ‘Y’ being the quantity of particles), usually show a few peaks, each forming a classical bell curve. The particle sizes average out over each peak, meaning that the width of a bell curve is the range of particle sizes emitted from a single site, and the area under the peaks is the total number of particles emitted from the nebuliser. What happens when the particle size is not sufficiently small? When this happens, the aerosol can begin to revert back into a sitting liquid. The particles collide with the walls of the mixing chamber and, if they hit with sufficient momentum (based upon their speed as well as their mass), the surface tension of the particle can break and cause it to stick to the wall. Other particles may then collide with this droplet, causing it to expand until it eventually breaks off back into the aerosol mixture. This can then be pulled up into the burner head, causing a huge spike in the concentration read by the photodiode array.
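
    As a small illustration of summarising a particle size distribution (the droplet diameters below are invented for the example), the mean gives the typical droplet size and the standard deviation the width of the bell curve described above:

```python
import statistics

# Hypothetical droplet diameters (in micrometres) sampled from a nebuliser's
# aerosol. A small mean with a narrow spread is the ideal described above.
droplet_diameters_um = [3.1, 2.8, 3.4, 2.9, 3.0, 3.3, 2.7, 3.2, 3.0, 2.9]

mean_d = statistics.mean(droplet_diameters_um)
spread = statistics.stdev(droplet_diameters_um)
print(f"mean diameter: {mean_d:.2f} um, standard deviation: {spread:.2f} um")
```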

  • The history of the flame photometer – from invention to modern-day use

    It has been nearly 150 years since Paul Champion, Henri Pellet and Charles Grenier developed a method of analysing the concentration of sodium in plant ash. Today, flame photometry is used around the world every day in a range of industries. During the 19th century, however, the methods for the chemical analysis of concentration were limited. So how have they evolved over history? One of the standard methods at the time would have been titration. Titrating metal ions poses difficulties, though, which cannot be overcome in some cases, including with sodium. So how has the instrument changed from its invention to today's modern-day use? How did Champion, Pellet and Grenier overcome these difficulties? In 1873, the trio developed a method using two ethanol lamps placed parallel to each other. Into one lamp they introduced a sample of sodium plant ash dissolved in water and nitric acid, after filtering. With the second lamp, a series of calibration standards was produced by dissolving sodium salts in water, and the flames were compared. What are the problems with this method? Using this method, there is no way to accurately collect the intensity of the light emitted, apart from using the naked eye to make a comparison. As noted in further work by Gouy in 1877, there was also no realisation that the quantity of sample introduced to the flame had as large an influence as the concentration did. Finding a solution to the problem Just four years later, Gouy designed a pneumatic atomiser to introduce a set quantity of sample and standard into the flames at the same time. Upon removing this vital flaw from the method, the very first instrument for the quantitative analysis of sodium in water samples had been developed. Gouy also developed an initial mathematical analysis of how the intensity of the radiation given off from a flame is directly proportional to the size and concentration of the sample. How accurate was Gouy’s method? Once the quantity of sample introduced was controlled, the accuracy and precision of Gouy's method improved from the two to five per cent margins of the first method to consistently better than two per cent. However, as it was still being done by eye, the method was very tedious, as a lot of standards had to be remade every time; this was also the best way to increase the accuracy of the method. How has that method changed? Fast-forward to today, and it’s interesting to note that a lot of the early developments in chemical analysis had at least some footing in biology. Early pioneers measured plant ash by photometry, or used colorimetry, which was designed by a biologist to analyse the coloured chemicals present in flowers. This invention is still being developed further, and here at BWB Technologies we are still pushing the boundaries almost 150 years later.

  • The basic chemistry equations you need for using a flame photometer

    When using a flame photometer, you will eventually reach a stage where using mathematics is unavoidable. This handy guide should help you fix any issues which may arise, with information about the most basic general chemistry equations needed for flame photometry. How does concentration change with dilution? In flame photometry, most of what you will be working with will come in either a liquid or solid form. Experts generally use a flame photometer to measure the concentration of ionic species in aqueous solutions. Often, you will need to change the concentration of a solution by changing the amount of solvent. Dilution is the addition of solvent, which decreases the concentration of the solute in the solution. Concentration is the removal of solvent, which increases the concentration of the solute in the solution. How do I calculate dilution? Mixing liquids together and altering concentrations can seem complex. It’s easy to get overwhelmed if you do not know this simple equation, which can fix the issue. The formula for calculating a dilution is: C1 x V1 = C2 x V2. Broken down, this is where: · C1 is the concentration of solution one · V1 is the volume of solution one · C2 is the concentration of solution two · V2 is the volume of solution two What this comes down to is the relationship between volume and concentration of mass. Here, the units of volume and concentration must be given in the same form. So, if the volume is in litres, the corresponding concentration must also be given in X per litre. When calculating this, you will commonly need to divide by 1,000 to convert ml to litres. When can I use this equation? When 10 ml of a 1 mol/L sodium solution is diluted to 0.1 L, the final concentration of sodium is found as follows: · C1 = 1 mol/L · V1 = 10 ml · C2 = unknown · V2 = 0.1 L As V1 is in the wrong unit for this calculation, it must first be converted to litres to fit with the rest of the units: V1 = 10 / 1,000 = 0.01 L. Then: · C2 = (C1 x V1) / V2 · C2 = (1 x 0.01) / 0.1 = 0.1 mol/L This is known as a 1:10 dilution – taking a unit of solution and diluting it to ten times its original volume, resulting in a 10x lower concentration of the solute. How does this equation actually work? It all comes down to the conservation of the mass dissolved in the solution. Concentrations are given in units of mass per unit volume so, for every X amount of volume, there is X amount of mass. By multiplying concentration by volume, we are left with mass alone. What the equation actually states is that the mass dissolved in the solution is constant over the course of the dilution.
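
    The same dilution calculation can be written as a couple of lines of Python - a minimal sketch of C1 × V1 = C2 × V2, keeping both volumes in the same unit so no conversion is needed:

```python
def dilution_concentration(c1_mol_per_l, v1_ml, v2_ml):
    """Final concentration after dilution, from C1*V1 = C2*V2.

    The two volumes only need to be in the *same* unit, so both are
    taken in ml here and no conversion to litres is required.
    """
    return c1_mol_per_l * v1_ml / v2_ml

# The worked example above: 10 ml of a 1 mol/L sodium solution diluted to 0.1 L (100 ml)
print(dilution_concentration(1.0, 10.0, 100.0))  # 0.1 mol/L
```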

  • Are sodium and hydrogen suitable elements for flame photometry?

    Not all elements are detectable by flame photometry. When using a flame photometer to detect the concentration of an element, that element must exhibit a specific set of prerequisites to make it suitable for analysis. The similarities between sodium and hydrogen might suggest that you could measure the concentration of hydrogen ions in solution - and therefore the acidity of a solution - via flame photometry. However, this is not the case. What are the similarities of hydrogen and sodium? Hydrogen and sodium are both located in group one (the first column of the periodic table). This means they both have one electron in their outer shell in their atomic form and are relatively small in size. They both also have a high charge density in their ionic form from the total loss of their outermost electron, leading to a contraction of the ion's diameter. Can you detect hydrogen using flame photometry? No. The main factor that prevents hydrogen being analysed by photometry is its lack of electrons. In fact, it only has a single electron present in its entire structure. The ionisation of this element to its ionic form results in a bare proton, a proton and a neutron, or a proton and two neutrons, depending on the isotope. Due to the lack of electrons, there is no possibility for the promotion of an electron to a higher energy state and, therefore, no possibility for the relaxation and release of a photon. Can you detect sodium using flame photometry? Sodium has full 2s and 2p orbitals after ionisation, losing the single 3s electron from its outer shell to form the Na+ ion. The element also cannot exist in its metallic form when in contact with water. This ensures all of the sodium present in the solution is in the ideal form to determine its concentration via photometry. How does the ionisation of sodium affect its diameter and charge density? In its elemental form, the radius of sodium is around 154 picometres, compared with a radius of around 116 picometres in its ionic form. This is an overall contraction in radius of around 24.7%. To summarise, the only reason hydrogen cannot be analysed by photometry is its lack of electrons. Comparing hydrogen to a suitable element such as sodium will hopefully expand your insight into element selection for flame photometry.

  • Properties of light – The basics of reflection and refraction

    When light travels through an object, it can be affected by the properties of that medium in different ways. For example, light from the sun travels through the vacuum of space largely unaffected until it reaches Earth. Upon reaching the planet, it meets the upper atmosphere, where it interacts with the ozone layer. Here in the upper atmosphere, some of these powerful rays are reflected back into space while others are allowed to enter the atmosphere; this also protects us from a majority of the sun’s radiation. What is reflection of light? Reflection occurs when light bounces off a surface, and it can be roughly categorised into two types: Specular reflection – light reflected from a smooth surface at a definite angle. Diffuse reflection – produced by rough surfaces that tend to reflect light at many angles. You will see far more occurrences of this second type of reflection in our everyday environment. What is refraction of light? Refraction is the bending of light. When entering a new medium (e.g. from a vacuum to air), the path of light is bent slightly. This is due to a property called the refractive index. It is a dimensionless measure of how fast light travels through the medium in question relative to its speed in a vacuum. You may think that the speed of light is a constant; however, that constant is actually its speed measured in a vacuum. In other mediums, the speed at which light travels depends on the medium itself. How do different speeds result in a bending of light? When light is shone at a prism at a 45-degree angle, one edge of the beam will hit the surface before the other. This means that, for a moment, part of the beam is inside the medium while the rest is still outside it. The part of the beam inside the denser medium travels more slowly than the portion still travelling outside it. This results in the beam angling further into the medium to account for the change in speeds, and thus in refraction. When does total refraction occur? It is not common for total refraction to occur. Part of the beam is not permitted to enter the medium and is instead sent back from the surface upon impact. As with all things, it then follows Newton's laws of motion, leaving at an equal but opposite angle of deflection away from the surface. This is what is known as reflection.
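
    The bending described above is usually quantified with Snell's law (n1·sin θ1 = n2·sin θ2), a standard relation that the article does not state explicitly. A short Python sketch with typical refractive indices for air and glass:

```python
import math

def refraction_angle_deg(incidence_deg, n1, n2):
    """Angle of refraction from Snell's law: n1*sin(theta1) = n2*sin(theta2).

    Returns None when the light cannot pass into the second medium
    (total internal reflection, only possible when n1 > n2).
    """
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1:
        return None
    return math.degrees(math.asin(s))

# Light hitting glass (n ~ 1.5) from air (n ~ 1.0) at 45 degrees bends towards
# the normal, continuing into the denser medium at roughly 28 degrees.
print(refraction_angle_deg(45, 1.0, 1.5))
```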
