What factor determines electronegativity?

1 Answer
Aug 20, 2015

Given the definition of electronegativity, the chief determining factor is nuclear charge (i.e. #Z#, the atomic number), tempered by the shielding of the inner electron shells.

Explanation:

Electronegativity is conceived to be the ability of an atom in a molecule to polarize electron density towards itself. Note that I write "conceived" because it is not a fundamental atomic or molecular property in the way that ionization energy or dipole moment is. Given this definition, it is easy to see why the elements towards the right-hand side of the periodic table are more electronegative than those on the left. First-row atoms such as fluorine and oxygen have high nuclear charges for their row (they sit at the right-hand side of the table), so it makes sense that these atoms are rated as highly electronegative on the various scales.
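As a rough illustration (using Slater's rules, which give only an approximate picture of shielding), the effective nuclear charge #Z_(eff) = Z - S# felt by a valence electron climbs steadily across the row: for lithium #Z_(eff) ~~ 3 - 1.70 = 1.30#, for oxygen #Z_(eff) ~~ 8 - 3.45 = 4.55#, and for fluorine #Z_(eff) ~~ 9 - 3.80 = 5.20#. This steadily increasing pull on the valence shell is exactly the trend the electronegativity scales try to capture.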

So the obvious follow-up question is why, say, sulfur and chlorine are less electronegative than their first-row congeners, oxygen and fluorine respectively. Certainly the second-row atoms have the greater nuclear charge. The answer is that when you go down a group, a full, complete shell of inner electrons effectively shields the valence (outermost) electrons from the increased nuclear charge. After all these years I can still quote the Pauling electronegativities of the halogens: #F, 4.0; Cl, 3.2; Br, 3.0; I, 2.7#; they decrease down a group, but increase across a period (as we would anticipate). These values do have some basis in reality, in that Pauling derived them from thermochemical data such as bond dissociation enthalpies, but remember that they form an ad hoc scale.
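
For completeness, a sketch of where Pauling's numbers come from (in the usual textbook form): an #A-B# bond is generally stronger than the mean of the #A-A# and #B-B# bonds, and Pauling attributed the excess stabilization to ionic character, defining #chi_A - chi_B = (eV)^(-1/2)sqrt(E_(d)(A-B) - 1/2[E_(d)(A-A) + E_(d)(B-B)])#, with the scale anchored so that fluorine comes out near 4.0.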