Simon Derricutt archive from Revolution-Green

March 7, 2022:


First draft (additions since the 16 Jan 2022 version give links to useful sites)

The TL;DR précis of this essay:
If you have a wave with energy, and a functional diode that will rectify it, you've got usable
(unidirectional) energy. The diode is the thing that breaks the symmetry. It really boils down to
breaking the right symmetry, and then you can break the conservation law that is associated with
that symmetry (see Noether’s Theorem). That's really the core of it. For CoM (Conservation of
Momentum), it's the limited speed of light and using a varying field that breaks the symmetry so the
action and reaction are no longer equal and opposite. If they are not equal and opposite, then
momentum is not conserved.

This first section mainly covers the history of why I went down this heretical path. I'd suggest you
skip to the section on CoM or 2LoT, whichever most interests you, and come back to reading this
introduction later if you want to. In summary, though, I was led to question some very basic theories because of experimental evidence that they were wrong. There are very few equations in this essay, since it's mostly logic, and no illustrations yet either. If you choose to say this is just wrong, that's of course
your prerogative, but simply stating that "that breaks the Laws of Physics" is not adequate here -
you really need to find a hole in the logic.

This group of sub-essays was the result of spending quite a few years being the main moderator of
Revolution-Green.com, a website that reported on the various Free Energy scams going around and
also ran articles from time to time on the people running honest research in that field. Not that any
of the honest researchers could show something that actually worked, but they were largely either self-financed or funded by people who donated cash in full knowledge that the ideas were not guaranteed to work. The scams were in a way more fun and normally got a lot more reader comments, probably
because it was obvious the stuff wouldn't work. The borderline between "honest" and "scam" wasn't
always clear, with some of the inventors personally believing that the devices would work once
they'd built the next (and normally bigger) version, even though the data from their current device
showed that it didn't work in principle.

This is the realm of crackpot science. Mostly it's pretty easy to see whether something will work or
not once you've reduced it to a basic Principle of Operation. Apply the various conservation laws, so momentum and energy are conserved, look for the point in the cycle that is gainful and where the energy is lost, sum over the complete cycle, and the job is done. You may need to check the thermodynamics, too. Once you know it won't work, look for where the energy source is, which was
often a hidden battery. A lot of the claimed successes turned out to simply be measuring things
badly, either by using a wrong method or by misunderstanding how that measurement worked. Such
is the world of Free Energy - mostly in error.

Over the years, though, a few things remained that actually were unexplained. Crystal Cells had
been dismantled after 10 years of delivering power and the Copper and Zinc (or Aluminium)
electrodes showed no corrosion, which they should have done if the power they produced had been
galvanic (which was the obvious explanation). Of course, you can distrust the person who did the analysis if you want - after all, they must be crackpots to even try to make such cells. Similarly, the Lovell
Monotherm continued to deliver power without corroding, and Robert Murray-Smith had both
replicated the original design and tried other formulations that worked too. In order to work without
that galvanic corrosion, these devices would need to violate the 2nd Law of Thermodynamics
(2LoT). Throughout the last century and a half, it's been said that if you think your device will
violate 2LoT, then you're simply wrong. Another device that seems to violate 2LoT was made by
Arthur Manelas, who was a good friend of a friend of mine and was thoroughly honest in his 
experiments. Arthur had built a battery-powered car, and left the device charging it during the week
and drove it each weekend to town to do his shopping. He'd spent years trying to replicate Floyd Sweet's VTA, eventually managed it, and found that it worked, delivering around 18W continuously whilst the device itself stayed around 10°C below ambient. Given the evidence
from people I trust to tell the truth, the thing actually worked and did violate 2LoT. The question is
exactly how it did that, and I'll put forward a possible explanation later on in the section on 2LoT.
There's also the EMDrive, and various replications of that principle, which was tested by NASA and shown to generate thrusts from a few micronewtons up to millinewtons. Of course, at that level it's easy to state that, since Conservation of Momentum (CoM) is absolutely true, this must be experimental error - maybe a reaction against the Earth's magnetic field, or against something else in the laboratory, from the current-carrying wires or areas. However, NASA controlled for that, though maybe they didn't catch all the possible confounders. Some replications of Shawyer's device ended up not
showing any such thrust, such as Tajmar's experiments. Thus there are conflicting experimental
results and largely people will choose to believe those that show that CoM remains inviolable. If
however you go back to Newton's original derivation of his 3rd Law of Motion, you'll find that
there were things he couldn't know at the time, such as atomic theory, field theory, and the speed of
light, and when you take those into account you'll see that there is indeed, in theory, a situation in which momentum will not be conserved, because the speed of light is finite rather than infinite. I'll show
this derivation in the CoM section.

If CoM can be violated, then it follows that Conservation of Energy (CoE) can be violated using
that same loophole too, and may have other ways in which it can be violated because it is no longer
absolute.

So far, it seems that we can violate the laws of thermodynamics as well as Newton's Laws of
Motion, and the loopholes will be explained a bit later on. How about the speed of light being
constant? Again that's something I learnt (and accepted as being true) a long time ago and it became
(like Newton's Laws and thermodynamics) one of the bedrocks of my understanding of physics and
how the universe worked. Again, though almost every experimental result confirmed the absolute
nature of the speed of light, a few experimental results seemed a bit odd. For example, the rate of
heat-transfer across a micron-scale gap can be 4 orders of magnitude higher than expected based on
larger gaps. There are experimental measurements of transmission of signals faster than light. Some are quantum-based ideas (see https://www.researchgate.net/publication/8393805_Communications_-_Quantum_teleportation_across_the_Danube ) that we'd expect to be somewhat non-classical, but through a friend who is an Aether theorist I came across Steffen Kühn's experiment of sending data through a standard coax at 3 times the speed of light, by splitting the coax into lengths of less than 1/4 wavelength of the signal frequency, leaving it unterminated, and using buffers between sections of coax. See https://www.researchgate.net/publication/335677198_Electronic_data_transmission_at_three_times_the_speed_of_light_and_data_rates_of_2000_bits_per_second_over_long_distances_in_buffer_amplifier_chains for the paper on that. I can't find an error in the experiment, so it seems something odd happens in the near field, and especially within the 1/4-wavelength region, where signals travel faster than light. This has implications for quantum physics as well as Relativity. Of course, Relativity itself can be a bit paradoxical, and despite the satellite GPS system being portrayed as one of the best tests of Relativity, Ron Hatch (the guy who sorted out the maths for the timings and took out the fundamental patents) actually used Aether theory in the work and showed that the maths doesn't work unless you use the Earth-Sun frame as your reference frame.
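As a rough illustration of the length scales implied by "less than 1/4 wavelength" (my own illustrative numbers, not values taken from Kühn's paper), here is a minimal sketch; the velocity factor and example frequencies are assumptions:

    # Quarter-wavelength of a signal in coax for a given velocity factor.
    # Illustrative only - the frequencies and velocity factor are assumptions,
    # not values taken from Kuehn's paper.
    c = 299_792_458.0          # speed of light in vacuum, m/s
    velocity_factor = 0.66     # typical for solid-polyethylene coax (assumed)

    def quarter_wave(frequency_hz):
        """Quarter-wavelength in metres for a signal at this frequency."""
        wavelength = velocity_factor * c / frequency_hz
        return wavelength / 4.0

    for f in (1e3, 100e3, 10e6):   # example signal frequencies (assumed)
        print(f"{f/1e3:>8.0f} kHz -> lambda/4 = {quarter_wave(f):,.1f} m")

The point is just that for low-frequency signals the 1/4-wavelength region is surprisingly long, so "sections shorter than 1/4 wavelength" is easy to arrange with ordinary cable lengths.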

Based on experimental results, some of the fundamental Laws of Physics I learnt as a student seem
to be more of "this is what happens nearly every time", and so if we engineer the right situation we 
can bypass those Laws. We need to check the derivations of those Laws where possible, but as Feynman said, if the experiment doesn't match the theoretical prediction then the theory is wrong.
Of course, if the experimental result can be shown to be faulty, then the theory can be rescued at
least for a while, but when we get a result greater than experimental error when theory forbids that
thing to happen at all, I think we should reconsider the theory.

An important theorem here comes from Noether: a symmetry leads to a conservation law, and breaking that symmetry will result in the associated conservation law also being broken. Thus we need to consider the symmetries and how we can break them.

********************************************************************************

Conservation of Momentum (CoM):
I'll start with CoM because it's the simplest derivation. We'll start from what Newton saw and
worked out, and then add in what he didn't know at the time. The question here is "why is
momentum conserved?". Once we stop defining it as being conserved, and find out why it is
conserved, we should also find the situations in which conservation doesn't apply.
Consider two billiard-balls colliding. When they are in contact, the springiness of the material will
apply an equal and opposite force to each ball for the duration of the time they are in contact. The action (force multiplied by time, i.e. the impulse) will thus be equal and opposite to the reaction (also force multiplied by time), and thus the momentum (mass times velocity) taken from one ball will be exactly added to the other, and overall momentum is conserved. If you multiply each side of the equation F=ma by the contact time t, you get force times time equals mass times the change in velocity - that is, the change in momentum.
However, these days we know that the matter in those billiard balls doesn't actually touch, but
instead there's a repulsion produced by the electric fields of the atoms. In fact, the only way a force
can be produced is by a field. There is also necessarily a distance between those atomic and
subatomic particles, and since there is a maximum propagation speed of such forces through a field
(the speed of light) then we will need to take relativity into account.
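Written out in symbols (just restating the step above, and treating the contact force as constant over the contact time t):

    F t = m a t = m \Delta v = \Delta(m v) = \Delta p

so equal and opposite forces acting for the same contact time transfer equal and opposite amounts of momentum, and the total momentum of the pair is unchanged.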

If we choose our reference frame as the mid-point between those two balls, where the total momentum in the system adds to zero, then we can see that, providing the field strength doesn't vary, the relative velocities and the propagation speed of the force in the field make no difference, and momentum is still conserved; and if it is conserved in one inertial frame it will be conserved in all inertial frames. Thus any mechanical interaction will conserve momentum, as will for example
two permanent magnets in opposition where the distance over which the force operates becomes
much larger and visible to the eye. Also note that with mechanical interactions, the distances over
which the field acts are so short that any deviations from absolute conservation would be
unmeasurable anyway.

However, let's instead look at what happens if the electric/magnetic field is varying quickly relative to the distance between the two objects. To make this easier for now (though there's a caveat here I'll point out shortly), consider two loops of current with a sine-wave drive to each, where the loop separation is 1/4 wavelength and the phase of one loop differs from the other by 1/4 of a cycle (loop 2 is advanced by 90° relative to loop 1). For loop 1, by the time the wave from the second loop has propagated across the gap it arrives in phase with loop 1's current, so loop 1 feels a force towards loop 2. By the time the wave from loop 1 has reached loop 2, it is 180° out of phase with loop 2's current, so loop 2 sees a repulsion from loop 1. Instead of each loop seeing equal and opposite forces, each loop sees an equal force in the same direction. This conclusion is not new physics, and can be deduced from high-school textbook electromagnetism.
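Here is a minimal numerical sketch of that phase bookkeeping (my own illustrative code, not taken from any paper). It assumes the interaction force is proportional to the product of the local current and the retarded current of the other loop, with in-phase parallel currents attracting; it assumes the field propagates at c even in the near field (exactly the caveat discussed below); and it leaves all magnitudes as 1:

    import math

    f = 1.0e9                    # drive frequency, Hz (arbitrary choice)
    w = 2 * math.pi * f          # angular frequency
    c = 299_792_458.0
    d = (c / f) / 4              # loop separation: one quarter wavelength
    delay = d / c                # one-way propagation delay between the loops

    def i1(t):                   # loop 1 current
        return math.cos(w * t)

    def i2(t):                   # loop 2 current, advanced by 90 degrees
        return math.cos(w * t + math.pi / 2)

    # Time-average the current products over one full cycle.
    # Positive average = attraction towards the other loop, negative = repulsion.
    N = 10_000
    T = 1 / f
    avg_on_1 = sum(i1(t) * i2(t - delay) for t in (k * T / N for k in range(N))) / N
    avg_on_2 = sum(i2(t) * i1(t - delay) for t in (k * T / N for k in range(N))) / N
    print(f"loop 1: {avg_on_1:+.3f} (attracted towards loop 2)")
    print(f"loop 2: {avg_on_2:+.3f} (repelled away from loop 1)")
    # Both time-averaged forces point the same way (from loop 1 towards loop 2
    # and beyond), instead of cancelling as they would for a static field.

With a static field the two products would be identical and the forces would cancel; the retardation across the 1/4-wave gap is what breaks the symmetry.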

This violation of CoM can be generalised to any situation where the field strength is varying, and it
turns out that momentum is only actually conserved when the field that transmits the forces is
constant. In nearly every standard situation, this is of course the case, and the only place you're
likely to see a violation of CoM by this effect is where you're using microwaves and where the
dimensions and arrangement are such that the forces produced do not cancel out.

There is a bit of a problem with the "two loop" idea, though, since there's experimental evidence that the speed of light may be quite a bit faster in the near field, and especially at 1/4 wavelength and nearer - see Steffen Kühn's experiment mentioned in the introduction. Thus if we want to produce a force by getting the phases of fields and currents right, it's likely a lot easier to engineer using a resonant cavity with a high Q (quality factor), which gives a much higher field strength, and to work in the far field, where the equations are pretty exact. I've tested the simple two-loop idea at 5.8GHz and couldn't measure any thrust, though I'm not certain that the calculated thrust would have been enough to measure anyway (less than a tenth of a micronewton).

Though the schoolkid-level analysis of the two-loop idea fails because it assumes the speed of light is constant even in the near field, you could separate the loops by 10.25 wavelengths instead and get the same logical result, though the force would be very much smaller. The conclusion from this
is that momentum is not actually a conserved quantity, even though in the majority of situations it
will be conserved.

It's also worth looking at Richard Banduric's Electric Spacecraft idea at https://electricspacecraft.org/index.html . He found a velocity-related term in Faraday's Law that is normally treated as being zero, and realised that this could produce a broken symmetry between the forces experienced by moving electrons and the field they produce. I'd trust his measurements, since I had enough emails to be confident he's very competent. The "reactionless" force produced there was around 100mN for a few hundred watts input, which is a lot more efficient than the EMDrive. On the other hand, the approach described at http://physicsfromtheedge.blogspot.com/2021/02/horizon-engineers.html may produce of the order of 10N/kW once it's been developed. When it comes down to what maximum thrust you'll get for a kW input, that is more a matter of choosing the right method - thrust and input power aren't really proportional, and all the energy put in is in fact wasted one way or another, with the force produced being a matter of how well you implement the idea.

Given any system where a constant force (relative to the device) is produced for a constant power
use, and where this device is allowed to accelerate in free space, there will be a point where more
kinetic energy is being produced per second than is used to power the device per second. The
kinetic energy produced is the force multiplied by the distance travelled (in the original frame in
which the device starts at rest), and of course the distance travelled per second will increase as the
velocity increases. Thus, given that momentum is not a conserved quantity, neither is energy.
The symmetry that needs to be broken for CoM is that the action and reaction are equal and opposite. Whereas this symmetry will not be broken when the fields used are constant, it can obviously be broken when a changing field is used, simply because the propagation rate of the disturbance in the field is limited. Mike McCulloch suggests an alternative way of breaking the symmetry, by creating an artificial horizon.
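As a quick numerical check of the energy argument above, here is a minimal sketch with assumed round numbers (the thrust and input power are illustrative, not measured values): in the frame where the device starts at rest, kinetic energy grows at F·v per second, which overtakes the input power P once v exceeds P/F.

    # Crossover velocity for a constant-force, constant-power thruster.
    # The thrust and input power are assumed round numbers for illustration.
    thrust_newtons = 0.001        # 1 mN of thrust (assumed)
    input_power_watts = 1000.0    # 1 kW of input power (assumed)

    # Kinetic energy in the starting rest frame grows at F * v per second,
    # so it matches the input power when v = P / F.
    crossover_velocity = input_power_watts / thrust_newtons
    print(f"KE gain per second exceeds input power above v = {crossover_velocity:,.0f} m/s")
    # -> 1,000,000 m/s for these numbers, well below the speed of light, so
    #    reachable in principle if the force really stays constant in the
    #    device's own frame.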

A related question is “why is energy conserved?”. So far, we’ve found that it is, and this was the
reason that the neutrino was proposed to exist long before it was actually detected. Possibly the
conservation is because the interactions between fields and between fields and particles have an
underlying symmetry. Still, since we’ve shown a way to break that symmetry for CoM, there may 
be other symmetry-breaks with energy interactions too. Of course, if you go back to the Big Bang
theory, then at that point there was a pretty severe violation of CoE in that all the energy in the
universe was suddenly produced.

*****

There is a mathematical derivation of this predicted violation of CoM by Asher Yahalom of Ariel University at https://www.researchgate.net/publication/357604001_Electric_Relativistic_Engine and https://www.researchgate.net/publication/341878074_Energy_Conservation_in_A_Relativistic_Engine , and he’s also produced a few other papers on this subject. Thus it’s not just me who has seen this loophole in this basic axiom. However, the belief that momentum is a conserved quantity is very strong, and so few people will even consider that it may be wrong. Even Yahalom is careful to show that momentum is overall conserved, by that momentum going into the electric and magnetic fields. However, can we measure any difference between an electric or magnetic field with momentum and one that does not contain such momentum? The equations will be the same, and as far as I can see we can measure no difference either. Yahalom also calculates that, for a perfect system (that is, no resistance losses), the work (that is, energy) that goes into the electric and magnetic fields is 6 times what goes into acceleration of the device. This in turn implies that the faster the device goes, the smaller the force produced for the same input power, such that the rate of increase of kinetic energy does not exceed the energy you put in (in fact it will be 1/7 of the energy put in, since 1/7 goes into kinetic energy, 2/7 into the electric field, and 4/7 into the magnetic field; Yahalom states the electric and magnetic shares as 2/6 and 4/6 of the field energy, which is the same split). It is obvious that this is paradoxical, since it depends on which inertial frame you use to measure the velocity at any point, with the kinetic energy depending on the velocity squared. As I noted, though, it’s a big jump to suggest that momentum and energy are not conserved quantities, and in general people will accept a paradox here rather than state that the axioms might be wrong. Here, the force generated should be produced in the frame of reference of the two loops, and thus be independent of the velocity; if it is instead dependent on velocity, that would be proof of an Aether with a preferred reference frame. One way or another, the two current loops and the “unbalanced” force they must produce are going to break some law or axiom.
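For clarity, the same bookkeeping in symbols, writing W for the total input work in the ideal (lossless) case; the symbols are mine, just restating the ratios quoted above:

    W = \tfrac{1}{7}W + \tfrac{2}{7}W + \tfrac{4}{7}W \quad (\text{kinetic} + \text{electric field} + \text{magnetic field})

so the field energy is six times the kinetic energy, and Yahalom's 2/6 and 4/6 shares of the field energy are the same split as 2/7 and 4/7 of the total input.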

********************************************************************************
2nd Law of Thermodynamics (2LoT):

This bit of work was done while I still thought that energy and momentum were absolutely conserved. Since, for the things we're dealing with here, that will apply anyway, there's no need to take the exceptions into account.

There are some concepts baked into thermodynamics that derive from Sadi Carnot's perception of heat as being a fluid ("Caloric") that would only flow from a hotter object to a colder one. There's also that concept of "thermodynamic efficiency" which, because in real life we don't have a cold sink at zero degrees absolute, can never reach 100%. A numerical example may illustrate this. Let's say we have a perfect Carnot engine filled with a perfect gas that has a constant heat-capacity over temperature, and we're running with a cold sink of 100K and a hot sink of 200K. At 100K, the gas has a total energy of Q joules. To raise it to 200K we thus need to add another Q joules, and the gas now has a total energy of 2Q joules. Running the Carnot engine between 200K and 100K, the Carnot efficiency will be 50%, and so we get Q joules out of the engine. We put Q joules in as heat, we get Q joules out as mechanical energy (so 100% of the heat energy we put in comes out as mechanical work), and yet the Carnot efficiency was only 50%.
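A minimal script restating that arithmetic (it follows the essay's accounting above, i.e. it treats the whole 2Q energy content of the hot gas as available to the 50%-efficient engine):

    # Numerical restatement of the Carnot example above, using the essay's accounting.
    T_cold = 100.0   # K
    T_hot = 200.0    # K
    Q = 1.0          # energy content of the gas at 100 K, in arbitrary units

    heat_added = Q                           # heat needed to go from 100 K to 200 K
    total_energy_hot = 2 * Q                 # constant heat capacity, as assumed above
    carnot_efficiency = 1 - T_cold / T_hot   # = 0.5 between 200 K and 100 K
    work_out = carnot_efficiency * total_energy_hot

    print(f"Carnot efficiency: {carnot_efficiency:.0%}")
    print(f"Heat added: {heat_added} Q, work out: {work_out} Q")
    # The work out equals the heat added: the point being made here is that the
    # 50% "thermodynamic efficiency" is not the same thing as the fraction of the
    # *added* fuel heat that comes back out as work.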

In reality, of course, the efficiency of conversion when we look at the joules in and out is only 100%
if we have that perfect Carnot engine and perfect gas, and the Carnot (thermodynamic) efficiency of
50% is only really a restatement of Conservation of Energy (which is why it cannot be exceeded
using a classical heat engine). Also in reality there will not only be losses through imperfect
insulation, not enough time allowed to reach thermal equilibrium, drag and frictional losses, and
temperature cycling of the piston and cylinder, but also with a real gas the heat capacity will vary
with temperature too, so in our example above the amount of energy in the gas at 200K will not be
exactly twice that at 100K anyway. Thus with real materials even the Carnot efficiency will not be
accurate, even though Conservation of Energy still applies and you won't get out more energy than
you've put in.

I spent quite a long time nitpicking this process because I had a hunch there was something wrong
with the concept but couldn't quite see it. It took me around 40 years to see the solution, because it
is hidden in the language used to describe the process. Everyone seems to regard the
thermodynamic efficiency as being equivalent to the actual conversion rate of fuel energy to output
mechanical energy, and to ignore the point that in the real world we don't start with the cold-side
working fluid at zero absolute but at the ambient temperature.

There are some other failures in the classical deductions and even in the definitions of the words
and concepts used. The main one is that work is regarded as being a scalar, when it is actually a vector - force times distance gives you the work, but that distance must be measured in the direction that the force is acting, and you must know the directions of both the force and the movement to correctly calculate the number for the work done. Work is in fact a vector quantity. Kinetic energy is regarded as a
scalar, too, but it can only be measured when it is carried by some particle and that particle must be
moving and therefore has a direction. You can't have (scalar) kinetic energy without a momentum
vector. Heat itself is regarded as a scalar, too (we simply talk about the number of joules and ignore
directionality). Of course, when you're dealing with calculations for steam engines (what
thermodynamics was really invented for) the maths works when you deal with the energy and heat
as scalars. Because this works well, and always gives the right answers when dealing with heat
engines, it is regarded as being well-tested and correct. Thus the prediction that you can only get
usable energy when working between two temperatures is regarded as being absolutely true.
It's true as long as you have ignored the directionality of the particles that carry that heat (so it isn't
actually true). When you add that directionality back into the logic, though, more things are
obviously possible. What is the difference between a joule of heat and a joule of usable energy? It is
simply that with the heat, the (many) particles carrying that joule are in random directions and thus
cannot move an object in a particular direction, and thus do (vector) work. If all those particles were
in fact going in the same direction, you'd have a wind which can obviously do work. When we use
energy, and thermodynamics counts the energy "lost", it's not in fact energy that's lost but instead
the directionality of the particles that becomes more random.
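For reference, the standard definition this directionality argument leans on:

    W = \vec{F} \cdot \vec{d} = |\vec{F}|\,|\vec{d}|\cos\theta

so the number you get for the work depends on the angle \theta between the force and the displacement; for a large collection of particles with random momentum directions the contributions largely cancel, which is exactly the distinction being drawn here between heat and usable energy.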

Thus the "lost energy" that is discussed in thermodynamics is not lost, and in fact after you have
done some work you have exactly the same amount of energy afterwards as you started with,
though likely in different locations/stores. All that is lost is the ability to do work with that energy
because all your initial energy is no longer acting in the same direction - the net momentum vector
has changed.

Usable energy (that is, energy you can do work with) is where the kinetic energy is all in the same
direction, though in general we don't worry about which direction because that can be changed quite
easily. Heat energy is where that total energy has a momentum vector summing to zero, though each
individual particle will still have its own momentum vector.

How to reverse this loss of directionality? What's changed here is the momentum vectors, and so
what we're looking for is a momentum exchange that changes those random momentum vectors into
vectors that are all aligned (or more-aligned than they were if you're only recovering some of the
energy). If you look at the definition of a conservative field (electrical, magnetic, gravity, or
nuclear), such a field maintains the sum of potential and kinetic energy of the affected particle, yet
changes the momentum vector in the direction of the field, so it does what we want. Generally, we
ignore the philosophical implications of this ordering effect and only recognise the tendency of
collections of entities to become disordered. Without the ordering effect of such fields, though, we
wouldn't exist. Bit of yin and yang here - without both the ordering and disordering effects life
would be very much different if it could exist at all.

Still, let's say you are carrying that heat energy (random momentum vectors) on a collection of
electrons and you subject those electrons to an electric field. Pretty obviously, those electrons will drift upfield towards the +ve pole of whatever is generating the field. If you look at how a solar panel
works, this is indeed what is happening here. The field is produced by the PN junction, which
generates a depletion zone for some distance either side of the junction itself where carriers are
swept out of the depletion zone. Photoelectrons produced within this depletion zone are swept out
of the zone before they have time to recombine and generate a photon again. Looked at another
way, random-direction photons coming into that zone produce random-direction photoelectrons
which are then given a direction by the inbuilt electric field and end up on one electrode (holes go
the other direction). This ordering effect happens without any extra energy needing to be supplied to
the PV, and delivers ordered energy (that is, unidirectional momenta) from disordered energy (the
incoming photons) because of the field structure produced by the arrangement of P and N
semiconductors. This spontaneous production of order is one of the things that thermodynamics
says can't happen, and yet we know that solar panels do actually work. People are in general so sure
that the laws of thermodynamics are inviolable that they find ways to prove that a PV doesn't
violate them even though it's pretty obvious that it spontaneously produces more order and thus
violates them.

One thing to point out here is that simply creating a field around a collection of random-direction
electrons will not deliver more work than is needed to set up the field. The essential point of the
gain of order in the PV is that a net-zero charge atom that is already in an existent field becomes
ionised by the incoming random energy, thus the emitted photoelectron already has the potential
energy that is due to that field. Basically, here we're changing an entity that is not affected by the
field (a neutral atom) into a photoelectron and a positive ion. The photoelectron is then accelerated
and given directionality by the field. The positive ion will capture an electron from another neutral
atom near it and downfield (because the presence of the field makes that far more likely than
capturing the electron from an atom upfield), thus the "hole" moves downfield.

The difference between the near-IR photons that a solar panel converts to usable energy and the
long-wave-IR photons associated with room-temperature IR radiation is solely the quantity of
energy each photon carries. Thus logically we should be able to use the same principle to convert
room-temperature radiation to usable energy. Practically, I haven't managed that yet, but that's more
of a technical problem (it's a bit hard to build a semiconductor fab on the kitchen table, and to get or
produce pure-enough materials) than a problem in theory. The commercially-available PVs go down to a band-gap of around 100meV (Mercury Cadmium Telluride - search on MerCaT or MCT sensor), but those are somewhat expensive since they are hard to make and have the consistency of a banana. There are some other alloys that would give us a direct band-gap of around 24-30meV, which would be compatible with room-temperature radiation. I'm not yet sure what
dopants would be required for these. There are people who can produce Graphene with a specific
band-gap in this range, though (basically, by adjusting the number and location of holes in the 
lattice), so it's possible that someone will make such a device using Graphene. Not a kitchen-table
job, though.
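As a rough check on the wavelengths and photon energies involved (standard formulae; the band-gap figures are the ones quoted above):

    # Photon energy vs wavelength, to compare solar near-IR and room-temperature
    # thermal IR against the band gaps quoted above. Standard formulae only.
    h = 6.626e-34        # Planck constant, J*s
    c = 2.998e8          # speed of light, m/s
    eV = 1.602e-19       # joules per electron-volt
    b_wien = 2.898e-3    # Wien displacement constant, m*K

    def photon_energy_meV(wavelength_m):
        return h * c / wavelength_m / eV * 1000

    T_room = 300.0                 # K
    peak = b_wien / T_room         # peak wavelength of 300 K thermal radiation
    print(f"300 K thermal peak: {peak*1e6:.1f} um, "
          f"photon energy ~{photon_energy_meV(peak):.0f} meV")
    print(f"1 um (solar near-IR): ~{photon_energy_meV(1e-6):.0f} meV")
    # Compare with the band gaps quoted above: ~100 meV for the HgCdTe ("MerCaT")
    # detectors, and ~24-30 meV for a material matched to room-temperature radiation.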

One of the silly questions I asked a while back was "why does heat only go from hotter to colder".
That's something I doubt that many people have thought about, given that we can feel that with our
fingers and it seems pretty obvious that it does happen. It's also part of thermodynamics that that
happens every time, and the 2LoT tells you that heat doesn't go the other way spontaneously, and
you have to do work to make it happen.

However, when you consider what heat is, it's a collection of particles of one sort or another
(photons are particles too, for some definitions of particle being a localised collection of energy)
with a random set of momentum vectors such that the total momentum is zero (if it's non-zero we
have a wind that can be separately counted in the energy sum). For each particle, it will follow a
random walk as it collides with other particles. I'm assuming that no particle can predict its future,
but instead only reacts to what is happening here and now to the particle. That means that each
particle may be heading towards a hotter or cooler location in the collection, and in fact the
temperature of the destination location can make no difference.

What is actually happening with heat transfer is a random process, which is independent of the
temperature-distribution of the medium. Over time, the energy will tend towards an even
probability distribution over the volume it can spread to. What we experience (heat passing from
hotter to cooler) is in fact an emergent property of the collection of particles because of the way we
consider heat to be some sort of fluid.

There's also a problem with the way we think of a thermal equilibrium, where we say that heat is no longer being transferred. In fact, the particles are swapping energy just as quickly as before the equilibrium became established, but *on average* we now have just as much energy passing one way as the other. If you look at the Boltzmann distribution, in any sufficiently-large collection of particles (consider a fluid, since for solids we have the Debye limit) where we measure the temperature, some of those particles will have a much higher kinetic energy than the average and some a much lower one. The rate of transfer of energy is pretty high in daily life, and you need lab kit to reduce that rate of transfer down below some threshold for certain experiments.

The average rate of energy-transfer fairly obviously depends on the temperature, and is equivalent
to what something of that temperature would conduct to an object at absolute zero. This logically follows from the point that a particle cannot predict the temperature of the location where it will next collide with another particle. That's quite a rate, given that it's often taken as being zero because the
system is in thermal equilibrium.

Much the same thing happens in a radiative thermal equilibrium. Each object radiates energy
according to its own temperature. If it wasn't at the same time receiving radiation from its
environment then it would naturally cool down to absolute zero. Again, this follows from the
proposition that the future is not predictable, and that some of the photons we see may have been
travelling for billions of years, thus cannot know the temperature of the object where they are
finally absorbed. An interesting point that arises from that observation is that if you could achieve a
device that only allowed EM radiation in one direction, you could produce a "cold sink" for a heat
engine and achieve a system that produced work from environmental energy. There have been
attempts to produce such a "light diode", but so far they only work for a narrow band of frequencies.
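To put a number on how large those hidden gross flows are (standard black-body figures, assuming an emissivity of 1):

    # Gross radiative exchange at thermal equilibrium: each surface still radiates
    # according to its own temperature even when the *net* flow is zero.
    # Assumes ideal black bodies (emissivity = 1).
    sigma = 5.670e-8       # Stefan-Boltzmann constant, W / (m^2 K^4)

    def radiated_power_per_m2(T):
        return sigma * T**4

    T = 300.0   # roughly room temperature, K
    print(f"A {T:.0f} K black surface radiates ~{radiated_power_per_m2(T):.0f} W/m^2")
    print("and absorbs about the same back from 300 K surroundings, so the net")
    print("flow is zero while the gross flows are hundreds of watts per m^2.")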

OK, that's the theory side of how to violate 2LoT, so what are the current practical examples?
Professor Dan Sheehan claimed success using conflicting equilibria but the system needed to run at
a fairly high temperature. I expect his data was correct, but the conversion efficiency appears to be
somewhat low and it was an expensive device. The Lovell Monotherm (look it up) is probably still
for sale, and I don't doubt their published figures, but the output at room temperature is minimal and
it really needs a temperature of around 50°C or greater to get a reasonable amount of power. It has
been replicated by Robert Murray-Smith, and I trust his data anyway, but again his replication and
the extended versions he made do require higher than room-temperature to work well, and a 5"x4"
device produced around 7 microwatts at room temperature. I had some discussions with Rich C.
from Connecticut who had something based on a cheap material and asked me why it worked (he
was getting around 50mW per kg last I heard, and had 3rd-party verification of that that he sent
me). Not quite commercial at that output, but he wouldn't tell me precisely what it was made of so I
couldn't tell him how it worked. However, I don't doubt his figures or his honesty, and his discovery
was accidental since he noticed the effect when working on some other use for the material.
Professor Fu produced a few femtowatts with his device that used photoemitters in a vacuum where
the photoelectrons were selected by a magnetic field which allowed gyration in only one direction
and thus broke the symmetry. MIT and others have produced nantenna arrays that deliver a few W/m² at room temperature (but more if you illuminate them with a heater or laser). Arthur Manelas' device breaks the mould here, by producing around 18W, which is actually somewhat useful, but at
this point in time has not been replicated. I think I know why it worked, by producing spin-waves in
one rotational sense only rather than the natural 50-50 split left and right rotation (so this is the
symmetry that is broken by controlling the bias and rotating magnetic fields), but the amount of
power these types of device will produce will be limited by the rate of heat-transfer through the
Ferrite material, so I doubt if even this will be that useful.

A more in-depth analysis of Arthur’s device requires knowledge of spin-waves as well as
thermodynamic Degrees of Freedom (DoF). Spin-waves are basically the precessions of the
magnetic vectors of atoms/molecules when they are in a magnetic field. The energy of the spin in a
particular material depends on the frequency of the spin, which in turn depends on the net magnetic
field strength. The quantum of energy needed for this spin is thus dependent on the magnetic field.
Since they are quantised, if the magnetic field changes then the spin-wave stops and another starts,
and the direction of spin is random – there’s a choice of only two directions and normally you’ll get
equal numbers of each on average. Since this is an energy store, each direction of rotation possible
adds another thermodynamic DoF, and each DoF can store an average maximum of 0.5kT where k
is Boltzmann’s constant and T is absolute temperature. You normally see this quantum effect in use
in magnetic refrigeration cycles, where increasing the magnetic field increases the effective number
of DoFs in the magnetisable material and thus shares the available thermal energy between more
DoFs, thus reducing the measured temperature – measured temperature is equivalent to the energy
in one of the translational DoFs in the solid. Reducing the magnetic field reduces the energy that
can be stored in spin-waves, and so the temperature rises. Like other quantised DoFs such as
rotation of molecules, the spin-wave DoFs can be partially filled depending on the temperature and
the thermal capacity of the material depends on the number of DoFs filled. Cutting a long story
short, since the basics can be easily looked up, this gives rise to the Magnetocaloric Effect, and we
can use that to produce a refrigeration cycle. However, it turns out that the direction in which a
spin-wave starts up can be biased by adding a rotating magnetic field that is orthogonal to the bias
field that sets the rotation rate. Thus if you regularly vary the bias magnetic field by a small amount
that is enough to ensure that spin-waves are being continually created and destroyed, whilst adding
in the rotating magnetic field of the right frequency orthogonal to the bias field, you’ll get new spin-waves only in one direction. The fields from the individual spin-waves can now be sensed outside
the block since they are synchronised, and so you can take energy out of them using a coil. New
spin-waves will be produced using the thermal energy in the material and the normal division of
energy between DoFs, and you get that energy out from the coils that produce the rotating magnetic 
field. It seems Arthur ran at a frequency of around 160kHz. Getting the bias field exactly right for
the desired frequency will likely need a bit of trial and error. It does seem likely that the rotational
field, once kicked off, will largely keep itself running, so you probably only need short pulses of
rotating field at intervals, followed by a period of harvesting the spin-wave energy using the same
coils. So far I haven’t had the time to test this explanation by building something that works. That’s
largely because I don’t see this as providing more than a few tens of watts, so it’s not really going to
be that useful or cost-effective. Though Arthur used Sweet’s method of producing the laminar
magnetic field structure in the ferrite blocks, it’s probably a lot better to build the main block from
magnetised laminations where opposing poles are separated by a material with a good
magnetocaloric effect. If you supply the main bias field using a permanent magnet, the bias
magnetic field variations can be supplied by AC through a coil, and thus little power is needed to
maintain the supply of new spin-waves.
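For a rough sense of the energy scales involved (my own back-of-envelope numbers, not measurements from Arthur's device):

    # Energy scales for the spin-wave picture above: the classical 0.5*k*T per
    # degree of freedom versus one quantum h*f at the quoted ~160 kHz drive.
    # Back-of-envelope only; not measurements from Arthur's device.
    k = 1.381e-23     # Boltzmann constant, J/K
    h = 6.626e-34     # Planck constant, J*s

    T = 300.0         # K, roughly room temperature
    f = 160e3         # Hz, the operating frequency mentioned above

    half_kT = 0.5 * k * T
    quantum = h * f
    print(f"0.5*k*T at {T:.0f} K       : {half_kT:.2e} J")
    print(f"h*f at {f/1e3:.0f} kHz         : {quantum:.2e} J")
    print(f"ratio (thermal/quantum) : {half_kT/quantum:.1e}")
    # The thermal energy per DoF is many orders of magnitude larger than one
    # quantum at this frequency, so plenty of spin-wave quanta can be created and
    # destroyed from the thermal bath at each change of the bias field.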

Though the MerCaT sensor is indeed commercially available (at around $3k for a 1mm² die), it only produces a few microwatts if that, so not really practical as a power source.

There are enough examples around to show that 2LoT can be violated, but the common problem at the moment is that the actual amount of power produced is minimal and not actually useful.

The commonality in the way they work is that they break a symmetry in some way. Since in the
standard situation, energy-flows are symmetrical in an equilibrium situation, and what we need to
turn heat energy into usable energy is to change the direction of the momentum vector, they'll all
need some sort of "diode" to perform that function of unbalancing the flow in one direction as
opposed to the other.

Conversely, anything that affects momentum in one direction differently to another (or all other)
direction could well be used to produce something that violates 2LoT. The larger the asymmetry it
produces, the more useful it will be and the more power you'll be able to harvest from a thermal
equilibrium - bear in mind that the energy flows in an equilibrium are high, even though the net
flows will be zero.

The symmetry being broken here is that energy flows are normally equally impeded in opposite
directions. Once you can break that symmetry, the related conservation law will also not apply. The
main barrier against seeing the logic here is that we are taught that kinetic energy is purely a scalar
quantity, whereas in practice we can only detect it when it is carried by some sort of particle that
will have a momentum vector. Thus we also see heat as being a scalar quantity, and thus
directionality simply cannot apply apart from the idea that heat only flows from hotter to colder.

********************************************************************************

OK, that's shown the loopholes in CoM, CoE, and 2LoT, so where else does this heretical thinking
lead? Whereas with CoM and 2LoT the logic looks sound to me, in this section things get pretty
speculative and so this is put here as ideas to discuss.

I've been reading Mike McCulloch's ideas for a few years now. If you haven't yet seen them, his
blog is at http://physicsfromtheedge.blogspot.com/ . As far as I can see there remain some
paradoxes here, in that the Unruh waves seem to transfer information instantaneously over universe-size distances, and if you have an infinite propagation velocity then you cannot have a wave or a
wavelength. However, the maths actually works despite that, and it explains a lot of experimental
anomalies without needing to invent Dark Matter or Dark Energy (which despite the effort
expended we still can't actually detect). I'd thus suggest that Mike's equations have a lot of truth in 
them even if the backing description may not be fully-formed yet. So far, the predictions of thrust
and the experimental measurements match pretty well.

Then again, the standard descriptions of reality using wave equations have a few problems too. In
order to support a wave, you need to have analogues of inertia and springiness which implies a
medium (Aether) of some sort, whether you call it space-time or something else. However, when
you get down to producing a model of how the field supports a wave it so far seems to always end
in needing some "stuff" you can't see and measure that "naturally" has those functions of inertia and
springiness. This recursiveness is a problem with all the theories I've seen, even down to String
Theory. As such, we don't actually have a workable model of what a field is or how it works. We
might as well define a field as being something that can support a wave and can exert a force and
leave it at that, though there remains a problem of how we define a velocity relative to that field
(and whether that velocity makes any difference to the effects). This last difficulty is normally hand-waved away in Relativity by saying it doesn't apply, since space-time is not a material substance
and thus the concepts do not apply anyway. However, experimentally we have only measured what
happens at high relativistic velocities relative to the lab frame, and haven't measured from one
relativistic frame to another. In reality, we don't know whether all the predictions will turn out to be
correct when we take things to more extreme conditions than we have currently tested.

The standard QM view of a particle is that it is a wave-function that is high-amplitude at the core, with the probability of finding the particle dropping outside the core and reaching zero only at infinity. This wavefunction is not physical but mathematical in nature, and defines the probability of finding the complete particle at that point. The implication is that the wavefunction is built from a continuum of
wavelengths. Mike's picture however implies that the wavefunction will be zero at the horizon
(Hubble radius) for a non-accelerated particle, and that the location of this horizon will vary with
the acceleration of the particle. It thus also implies that rather than a continuum of wavelengths
being available to build the wavefunction, there will only be specific wavelengths that have a node
at the horizon. The wavefunction must itself be built of quantised frequencies, and not a continuum.
What happens if we propose that instead of being a probability of finding the particle at that point,
we say that this probability function is in fact the matter-density at that point? We need to propose
that anything that affects any part of a particle will affect the whole of it, since otherwise the
production of inertia won't work. We also propose that time runs slower the greater the matter-density is, which does look to be fairly reasonable. Where the matter-waves overlap, the time would
then run even slower since the matter-wave density is higher, and because time runs slower the
energy in that volume is thus lower. In every case we see where there's a gradient in potential
energy, things have a tendency to move to reduce the potential energy (which increases the kinetic
energy as they start to move), and thus we'd end up with what looks like a force and can be
identified with gravity. That is, if we treat particles as being fuzzy and extending a long way from
their centre point, and the density of the matter-wave determines the rate of time, we produce
gravity. Gravity doesn't slow time as Einstein stated, but instead it is the slowing of time that
produces gravity. We should be able to determine this experimentally, since at the Lagrange point
L1 gravity is actually zero, so Einstein would predict that time runs at its fastest. If time is however
slowed by the density of the matter-wave, then at the L1 point time will be running more slowly
than at another point at equal distance from the Earth but along the Earth's orbit instead. With
atomic clocks in a satellite, this could be verified. Thus gravity itself is not really a field as such, but
a result of the distortion of the time-rate by the presence of matter. You could also measure the rate
of time down a hole in the Earth, since again at the core of the Earth Einstein would predict time
running faster, yet if it's a result of matter-density then time would run more slowly. In fact,
measurements so far have shown that time slows the deeper in the hole you go, but it is suggested
that this is because the density near the surface is less than that lower down, and we've so far not
gone more than a few km down anyway. I figure that the "not deep enough yet" explanation is a bit 
of a hand-wave and that better measurements would show that it continues to slow the deeper you
go.

This idea of the density of the matter-wave slowing time also might explain another problem. If
particles are indeed made of waves in space-time (or some other medium), then why do they bounce
off each other rather than simply pass through each other as a wave should? OK, I haven't seen
anyone else really mentioning this problem or even considering it as a problem. We just define a
particle as being something that will bounce off another particle and (when we look at two-slit
experiments) also has wave properties.

If we plot the rate of slowing of time around a particle, we'd simply get a plot of a function of the
gravitational attraction. However, that slowing of time will refract a wave too (seen as Einstein
lensing on a cosmological scale) and so we might expect such a particle to be deflected too by such
a time-gradient. Thus two such particles would each refract the waves of the other, and would
change the locus of movement of each other. We see that light is reflected by a large-enough change
of refractive index, so it's not a stretch to consider that the matter-wave may be thus reflected by a
sharp time-gradient (and since the time-gradient will get high close to the "particle" and time should
really stop at the Schwarzschild limit, that's going to reflect pretty well, being in effect an infinite
refractive index).

The problem here is that a positron will self-annihilate with an electron, producing two 511keV
gammas moving in opposite directions, and it is logically possible for the opposite to happen as
well though in practice we use a 1022keV gamma and a nucleus to produce that electron and
positron pair. Thus the "particle-wave-like" and "pure-wave-like" parts run a sort of mix and match
together with the property of charge. So far the only way I'm seeing this working is that whereas the
travelling wave (in this case a gamma) will go in a straight line (or geodesic, to be more precise),
maybe the particle is a spherical wave resonating in its own resonant cavity with the boundary of
the cavity being the current Hubble radius. It doesn't of course explain why that horizon is produced
and why it reflects the spherical wave. This won't explain charge, and why like charges repel and
opposite charges attract, though we could propose something based on phase of the resonance (but
then that also needs some universal phase reference and that doesn't seem reasonable). I'm
somewhat unwilling to propose more than the 3 physical dimensions we can actually see and
measure. Though it can be useful to treat time as being a dimension, it really isn't the same as the
spatial dimensions because it only goes in one direction - you cannot go backwards in time. Though
many people have used maths that implies the possibility of time running in reverse, sometimes
without realising that that is implied, I figure that this is not possible physically. If it was at all
possible, then the whole universe would be totally predictable.

As a visualisation, I can see the resonant spherical wave made up of a quantised series of spherical
waves almost working. There could also be for each wavelength a 1/4-wave effect, since we've seen
that below 1/4 wavelength there are odd things happening for EM waves. The higher the total wave-density, the slower things happen in that volume, thus time as we measure it would slow.

In trying to produce a model we can understand, we can only really use things we have seen and
thus understand (and have words for). Thus we see a wave on the ocean to get the concept of a
wave, and we've generalised that to compression waves in air (sound) and torsion waves in a jelly.
Much the same when we talk about fields, where we can visualise a vector or scalar quantity at each
3D-point in a volume and compare to what's happening in a stressed bit of jelly. We need an
analogue in order to be able to even discuss things with other people and be reasonably sure that
they get the same meaning from the same words. It's quite possible that what's really happening in
the foundations of physics doesn't have such an analogue. Maybe the mathematical description
could be the only one that works, but there can be a problem there with the maths suggesting that 
something happens that cannot physically happen, or alternatively the maths says something can't
happen but in fact it does. Maybe, as with CoM, there's an axiom that's wrong, or as with 2LoT
some of the word definitions are unnecessarily limited and thus limit the range of what is calculated
to be possible.

Producing a model that will properly replicate the effects of charge is difficult. As noted earlier, an
Aether model ends up being recursive, in that in order to explain the effects we can see we need to
propose stuff we can't see (because the scale is so much smaller, or it's in a different dimension,
etc.) that has the properties we're trying to explain. Once I'd got over the teaching that Aether was an outdated concept, and looked at the results and found a good agreement with reality, the underlying explanation was still unsatisfactory because it's recursive. For example, using the equations of fluid dynamics in order to explain inertia and springiness fails because the equations of fluid dynamics already incorporate inertia and the assumption that the particles bounce off each other (springiness), and so overall you're not really explaining things at all. It just looks like you are, providing you don't dig
any deeper. If you're using the effects of vortices on each other, and the attraction if they're rotating
in the same direction and repulsion if they are opposite, then there's a bipolar effect that we don't
see in reality, in that a charge of one polarity will attract or repel a charge of different or the same
polarity equally in all directions simultaneously, and you can't do that with a vortex. Flip that vortex
180° and it will flip between attraction and repulsion of another vortex. Only works if you propose
that the spinning is in some 4th or greater dimension and that in that dimension there are only two
possible directions. Though I can't say that's impossible, it's still somewhat of a stretch to swallow.
Based on geometry alone, I can't see a way of producing a unipolar field (such as that produced by
an electron with spherical symmetry) from anything that is fundamentally bipolar (a vortex or a
magnetic field). You'd need one or more extra dimensions that we can't otherwise detect or measure
in order to make that work, so using Ockham's Razor tells us that that is pretty unlikely.

Net result here is that I can't produce a reasonable model of what a field is, how it produces a force,
or even what that force actually does to a particle. An interesting point here is that when we apply a
force to an object, it will push back with an equal and opposite force while it accelerates. If we set
our frame of reference as being that object, what we see is that the object simply produces a
counterforce and otherwise (in its own reference-frame) nothing appears to happen. If we look from
an inertial frame we of course see the object accelerate, and (using Mike McCulloch's ideas) we'd
see the horizons of that object change. There's maybe a question here as to whether the object itself sees the horizons change in response to the applied force, and that's a question I'm still
considering. In one sense this provides a reason for inertia, in that the object in its own frame of
reference is not affected by a force and thus when no force is acting then there will be no change of
velocity as measured by the external inertial frame, but it's pretty obvious that when a force is
applied then *something* happens to the object. I have a hunch that the application of a force might
affect the time-rate, and thus the resonance conditions of the spherical wave within its horizons,
making the centre of the resonance move, but so far that idea hasn't really gelled into something that
can be quantified. It does seem that the force would need to act over the whole volume of the
particle, which itself stretches to the Hubble radius, thus the effect on our localised particle would
need to be extremely non-localised.

Despite being enshrined in various theories, there's actually no evidence of there being more than 3
physical dimensions. If there were more, then we'd really expect to have evidence of things rotating
about some axis and changing the size we can see in our 3-dimensional world. We define the dimensions such that each dimension is orthogonal to the others, so a movement along one axis alone cannot be detected along the other 2 axes, but a rotation about an axis will affect what you see and measure along the other 2 axes. Since in the world and cosmos we see there is always rotation of some sort, if there were other dimensions we should see rotations around them too, with visible effects on the axes we experience. That hasn't been noticed.

Quantum theory proposes various fields, where energy can be transferred between them. See Matt
Strassler for some good explanations of this. Top of the blog is https://profmattstrassler.com/ but try looking at https://profmattstrassler.com/articles-and-posts/particle-physics-basics/virtual-particles-what-are-they/ for some stuff about quantum fields. He's the sort of teacher I wish I'd had. The interesting point I'd like to make here is that if you have 3 dimensions and multiple interpenetrating fields in those 3 dimensions, then the maths can deal with that by treating the fields as extra dimensions. That doesn't mean that the dimensions are really there, but because the fields are largely orthogonal (that is, they largely don't affect each other, though some do) the extra (imaginary) dimensions make the maths give largely the right answers. I'm not totally certain that there are in fact different fields,
and suspect that they may instead be various aspects of a single field, but physics is really about
whether the predictions work rather than whether the concepts themselves are reality. You need a
degree of doublethink here, in that you treat things as true because they give the right answers but
you shouldn't believe that they are actually the exact truth. When we come up against experimental
anomalies that our current theories can't explain, being stuck in the belief that the theories are truth
and that the experimental anomalies are thus experimental error would just means that we get stuck
and unable to progress and explain those anomalies. There could be some extremely-useful effect
we're missing because we simply don't believe it happens.

At the moment you can find mainstream scientists arguing that the EMDrive (and other "reactionless" drives) is "theoretically impossible" and thus all down to experimental error.
Much the same for LENR, where there are some otherwise-unexplainable events (the severe
meltdown of Pons and Fleischmann, and the Thermacore meltdown) and some meticulous
experiments (Miles' heat/Helium correlation). With Shawyer's EMDrive, if you don't get the shape
right and don't get the resonance, it's not going to work, so replications (unless totally exact or
correct by chance) may not work. For LENR, the precise conditions under which it happens are not
yet known, so again replications (even by the same person at the same time) may not work – again
see Miles’ experiment, where the “non-working” cells, though designed and intended to be exactly
the same as the working ones, were used as controls during the experiment. For once, Einstein’s
quip about doing the same thing and expecting a different result doesn’t apply, since in LENR those
different results are a feature of the field. Mitch Swartz’s NANOR devices varied in how much
power they produced, too, and following the same procedure using materials from the same batch
doesn’t even guarantee that all devices will work the same, or indeed work at all. For both
anomalies, I think there's enough evidence that they really work (sometimes) that there is real
physics there. It's thus worth experimenting with, and I know a few people who are doing that.
Another anomaly I came across during my time at Revolution-Green was the Papp Noble Gas
engine. Initially, I considered that as simply a scam (as most people would do, reading the stories),
but I came to know Bob Rohner who actually built (and did the mechanical design for) the last
working engine for Papp. Bob is actually a very good mechanical engineer (though didn't know a
lot of nuclear physics) and very honest. He couldn't have been fooled by a non-working engine.
Papp, however, did lie in the patent application, and also in his account of travelling in the
submarine that he built. I'm pretty certain that the submarine itself existed, but I'm also pretty
certain it sank when he tested it. It might even turn up someday, maybe not far from its launch
location. Papp also lied, IMHO, in stating that his engine fused Helium. As far as I can see things, it
was triggered fission that supplied the energy, and the composition of the gas was not as he stated to
Bob or as patented. While Bob is still trying to get things to work, there's a limit to what I can
publish about what Bob told me or what I think about how it worked. I should point out that the "gas mixer" instructions stated that the "white deposit" was supposed to be excess noble gas that was burnt out, which is an obvious impossibility. There are other anomalies in the technical
descriptions, and you need to go through the patents with a negative space analysis - the stuff that's
critical isn't mentioned, so by seeing what's not talked about you can find out what's important. Also
to be noted is that gas suppliers state that stratification of a mixture of noble gases in a bottle would
be so small as to be undetectable, so you wouldn't need to roll the cylinder around to keep it mixed
(but you would need to do that if there was a finely-divided solid suspended in it). Bob might
succeed in getting something working, since he's still spending time in the workshop and testing
things. Such research can be a bit dangerous if you succeed, since the engine does produce quite a
high neutron flux if it works (and both Papp himself and Bob's brother Tom died of cancer). Also
worth noting is that Papp released all the mixed gas in the working engine and bottles a week before
he died (possibly so that the secret died with him) and that the only time his engines worked was
when he'd filled them with gas he'd mixed - what was in the gas was essential to operation, and he
never told anyone else that secret or wrote it down. Of course, if someone finds a good way to
violate CoM, we'll be able to generate energy very cheaply and without pollution, so there won't
really be a need for nuclear power. If that takes too long, though, having a nuclear power-source
that runs on unpurified Uranium ore and is of the order of 70-100kW could prove to be useful. I'd
be happy if I knew how Papp produced the triggered neutron source that caused the fission to
happen, since that would make production of a compact fission system possible even without the
noble gases and mechanical output.

During the course of looking at odd claims and digging down into them, I came across the work of
Fred Alzofon. Not normally something I'd have dug too deep into, given that a lot seemed to depend
on "crashed UFOs" and stories of alien visitations. However, Alzofon's theory is an extension of
Einstein's Special Theory of Relativity, where as well as using an uncertainty in location of objects
Alzofon added an uncertainty in the time of emission of a light signal. Hard to understand the whole
theory for me, since he uses Octonions (8-dimensional maths) rather than Quaternions (4
dimensions). However, it seems that most of the other physics equations drop out of the main
equation when you approximate some parts to zero - I'll need to take his word on that since I am not
competent on the maths involved. Alzofon was however a bona-fide genius, and that's pretty
obvious from the rest of his work. His Universal Field Theory was his life's work, and it wasn't
accepted, maybe largely because of the UFO connection and the implication of tinfoil hats. Of
course, there are problems with his experiment, which is why he didn't publish it, and it's hard to
prove that a "weight loss" is in fact a reduction in gravity or instead just an unexpected force. If
you’re working with microwaves in a resonant cavity, and a sample that’s of the order of half a
wavelength long, producing an “unexpected” force really ought to be expected. For that reason I’ve
been working on trying to show a reduction of inertia using a modification of Alzofon’s technique,
which would be a proof of the theory and also can be measured very simply by the pitch of a
tuning-fork (and thus not subject to measurement error, since I can hear the pitch change as well as
measure it). If we add in Horizon Mechanics (Mike McCulloch) to Alzofon's theory, what comes
out looks like it could describe reality pretty well. The horizon becomes a location where total
energy density goes to zero, and thus all matter waves must have a node, and no information can
pass through such a place where all waves have a node. Takes a while to get your head around that
concept....

One of the main problems in all this fringe physics is to decide which story is true, or how much is
truth and how much is either a lie or misinterpreted or wishful thinking. It's also necessary to
reconsider whether the text-book Laws of Physics are actually always valid or whether there are
limits of validity. We may also need to consider our picture of the fundamentals, and whether that is
close enough to the truth. The success of Mike McCulloch's Horizon Mechanics (also known as QI
or quantised inertia) implies that standard descriptions of the way the universe works are not what
happens in reality. The experimental successes there also bode well for us achieving cheap
spaceflight, as well as cheap energy. In the same way as Einstein showed us that Newton's laws 
were approximations that were valid for our normal situations but needed tweaking at high
velocities, I expect we'll find that Einstein wasn't the whole truth and that there will be exceptions
there, too. Possibly ways to travel faster than light, given that light itself seems to travel FTL within
the first 1/4 wave. If Fred Alzofon was correct with his field theory, and Mike McCulloch's horizons
can be produced by removing energy from the quantum field by an analogue of a refrigeration
cycle, then we could produce a small and separate universe within ours that is not subject to inertia
(or gravity) from the main universe, and thus that small separate universe has no speed-limit at all
within the large universe. That sort of thing would make interstellar travel possible. Hey - if we stopped dreaming about what might be possible, we wouldn't mess around with the anomalies.

**************************************************************************

Links to related subjects mentioned:
https://www.researchgate.net/publication/8393805_Communications_-_Quantum_teleportation_across_the_Danube (Faster than light telecommunications by quantum methods)

https://www.researchgate.net/publication/335677198_Electronic_data_transmission_at_three_times_the_speed_of_light_and_data_rates_of_2000_bits_per_second_over_long_distances_in_buffer_amplifier_chains (Steffen Kühn's experiment sending data FTL down a standard coax)

Search on “Ron Hatch Aether” to get details of the method used to correct the clocks of the GPS
satellites and the correct frame to use. https://beyondmainstream.org/scientist/ron-hatch/?type=abstracts&pg=1#abstracts gives a list of papers (but have your tinfoil hat handy...). Einstein’s
Equivalence Principle is itself refuted by gravitational lensing (proposed by Einstein and, once it
was actually measured, used as verification of Einstein’s prediction that light is bent twice as much
as the simple theory would predict).

https://electricspacecraft.org/index.html# for Richard Banduric’s method for reactionless
propulsion.

With the demise of Revolution-Green.com, it’s necessary to go to the Web Archive to see what I
wrote there…. Hopefully the Disqus comments also show, since I put quite a bit of extra stuff in the
comments.

https://web.archive.org/web/20170826080700/http://revolution-green.com/displacement-field-drive/
is the write-up I did on Richard Banduric’s electric space drive.

https://web.archive.org/web/20180923221652/http://revolution-green.com/alzofon-gravely-go/ for
Fred Alzofon’s theory and initial experiments.

https://web.archive.org/web/20190801121827/https://revolution-green.com/conservation-of-momentum/ for CoM

** need to find the WayBack machine links for these:

https://revolution-green.com/another-day-work/ was part of the series on thermodynamics.
https://web.archive.org/web/20191214064320/http://revolution-green.com/heat-move-hotter-colder/
The “silly question” of why does heat move from hotter to colder. In fact, in one sense it doesn’t,
though your fingers will tell you different.

https://web.archive.org/web/20190730150923/http://revolution-green.com/free-energy-by-simon-derricutt/ Some early ideas on how to violate 2LoT that are still viable, but unpatentable and also
need micron-scale manufacturing so would not be cheap or easy.

https://web.archive.org/web/20170821053618/http://revolution-green.com/robert-murray-smith_ambient-energy/ The way RMS produced versions of the Lovell Monotherm.

https://web.archive.org/web/20200725135903/http://revolution-green.com/thoughts-proell-effect-similar-ideas/ An analysis of claims to be able to violate 2LoT that I said wouldn’t work. Quite a
while has passed, and my prediction was justified. 

https://phys.org/news/2015-12-faster-nanoscale.html Heat transferred faster at nanoscale

Violation of Conservation of Energy by a rocket.
Consider a mass of 1kg, accelerating force of 10N, acceleration 10m/s², with the mass initially stationary.
This is just enough to get something rising upwards in standard Earth gravity at ground level. Ignore friction, air drag, and other losses. No relativistic corrections, either. Keep it as simple as possible.
Kinetic energy is 0.5mv² where m is mass and v is velocity; the rate of kinetic energy increase with time is the differential of this, which is mv multiplied by the rate of change of velocity, i.e. mav, or force times velocity.
Distance travelled under constant acceleration is 0.5at², where a is the acceleration and t is time.
Work (= energy used) is force times distance in the direction of the force.   

Case 1 - apply the force in the lab frame of reference.
At velocity 1m/s, the rate of energy used is 10Nm/s (10 watts or 10 joules per second), and the rate of increase of kinetic energy is also 10 joules per second.
At the point of measurement, the force has been acting for 0.1 seconds and thus we've travelled 0.5at² (0.5 times 10 times 0.01) or 0.05m, and we've done 0.5 joules of work (10N times 0.05m). Kinetic energy in the mass is 0.5mv² or 0.5 joules (since v and m are both 1).
At velocity 100m/s, rate of energy use is 1000Nm/s (1kW), rate of increase of kinetic energy is also 1000 joules per second.
Now we're 10 seconds in, we've travelled 500m, the total energy used in accelerating the mass is 10N times 500m or 5000J, and the kinetic energy of 0.5mv² is also 5000J.
 
No problem in case 1 - energy is conserved as expected. The force is applied in the lab frame, and the energy used is measured there too, and the energy used and the energy acquired by the mass exactly agree.

Case 2 - apply the force in the frame of reference of the mass. This can be done by using a rocket motor attached to it. Assume that the loss of mass of the fuel is negligible, which obviously won't be true but doesn't change the principle. This can be later adjusted to compensate for the rate of mass being ejected for a real rocket motor or jet, and either the increased acceleration as the mass reduces or adjusting the thrust as we go to keep the acceleration the same.
At velocity 1m/s, rate of energy used is 10 joules per second, rate of increase of kinetic energy is also 10 joules per second.
Again we've travelled 5cm, we've used 1 joule (0.1 second at 10 joules per second), but the kinetic energy in the mass is only 0.5J. Half the energy we put in has disappeared. 
At velocity 100m/s, rate of energy used remains at 10 joules per second. Rate of kinetic energy increase is however 1000 joules per second.
Again by this time we've travelled 500m, it's taken 10 seconds, and the total energy in the mass is 5000J but we've only used 100J in the rocket.
As the rocket goes faster, it produces more kinetic energy per second than is consumed to run the rocket. The kinetic energy is measured in the lab frame.
In both cases, the trajectory of the mass is exactly the same, the acceleration is the same, the kinetic energy of the mass is the same at the same points or times, and the two cases are exactly equivalent except for the amount of energy used.
Where does this extra energy come from? At low velocities, we lose energy, and in the extreme (when the mass is held stationary) the rocket produces no work at all, which is far easier to explain since the energy is dissipated as heat.
Note that I took the force of 10N as needing 10W to produce, and the actual force-to-power relationship may be different for different rockets or methods of propulsion; this would change the point at which we're producing more energy than we're using, but not the fact that there's a break-even point beyond which more energy is produced than is used.

If you attach the rocket with a string to a central point, and make the string long enough such that by the time the rocket comes back to the start-point it's reached 100m/s (thus 500m divided by 2pi which is around 79.58m), the rocket will deliver more kinetic energy than the integral of the energy that was expended in achieving that velocity. You don't actually need the string, but that makes the measurement happen at the same point in space that you initially measured the rocket as stationary and thus having no kinetic energy. The loop will take 10 seconds, you'll use 100 joules in the rocket, and produce 5000 joules after the first loop.
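
As a cross-check of the arithmetic in the two cases above, here's a minimal Python sketch (my own illustration, not part of the original articles) that integrates both energy books numerically. It keeps the same assumptions: a 1kg mass, a 10N force, and the rocket costing a flat 10W to produce its 10N in its own frame.

# Energy bookkeeping for a constant 10 N force on a 1 kg mass, comparing
# the lab-frame case (power = F*v) with the rocket-frame case (power
# assumed constant at 10 W, as in the text). Fuel mass loss is ignored.
m = 1.0               # kg
F = 10.0              # N
a = F / m             # m/s^2
dt = 0.001            # s, integration step
rocket_power = 10.0   # W, assumed cost of making 10 N in the rocket's frame

t = v = 0.0
e_lab = e_rocket = 0.0              # cumulative energy used in each case
while v < 100.0:
    e_lab    += F * v * dt          # lab frame: work = force * distance in the lab
    e_rocket += rocket_power * dt   # rocket frame: fixed burn rate
    v += a * dt
    t += dt

print(f"after {t:.1f} s: v = {v:.1f} m/s, kinetic energy = {0.5*m*v*v:.0f} J")
print(f"energy used, force applied in lab frame:    {e_lab:.0f} J")
print(f"energy used, force applied in rocket frame: {e_rocket:.0f} J")
# Expected: KE and lab-frame energy both ~5000 J, rocket-frame energy ~100 J.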

The problem obviously revolves around the frame of reference in which the force is applied. If we have a rocket expelling mass and thus producing a reaction force, and we hold the rocket stationary, we can measure that reaction force easily relative to the rocket which is stationary. If we let the rocket fly, that force remains relative to the rocket, and in the frame of reference of the rocket the rocket remains stationary, though from the lab frame of reference we see it accelerating off. 

The work done is force times distance in the direction that that force applies. In the frame of reference of the rocket, no work is done because no distance is travelled (yep, this is a strange observation but is actually true). In the lab frame, though, the rate of work done is not only easily seen and measured, but also varies with the instantaneous velocity at which the rocket is moving in our frame. That work done in the lab frame accumulates as kinetic energy in the rocket. That kinetic energy is a real quantity that can be measured in the lab frame (collide the rocket with some other large mass and measure the temperature rise).

Thus it appears that rockets can create energy from nothing. We need two conditions to be satisfied, firstly that the force produced is relative to the moving object, and secondly that that object is moving fast enough in our frame of reference. As a first stab at this, consider the original Hero steam engine, with steam jets on the periphery of a spinning disk. The faster it goes, the more efficient it will get, and this could probably be done with a compressed air supply. To get a high peripheral velocity, may need a large diameter. The initial efficiency is pretty low, so getting it fast enough to get to OU might be difficult. It does however suggest that Schauberger may not have been mistaken.

The thrust produced from reaction mass is the mass per second times the velocity of ejection. Thus 1kg/sec ejected at 1m/s gives you 1N of force, as will ejecting 10g/sec at 100m/s or 1g/s at 1000m/s. The specific impulse of rocket fuels is basically a measure of the exhaust velocity (see https://en.wikipedia.org/wiki/Tsiolkovsky_rocket_equation ); when quoted in seconds it is the exhaust velocity divided by g, which is the number of seconds for which the fuel can produce a thrust equal to its own weight ( https://en.wikipedia.org/wiki/Specific_impulse ). Since the velocity of ejection is measured relative to the rocket, if you eject the same amount per second at the same velocity, you get a constant thrust.
Rocket equations (Tsiolkovsky) at www.rocketmime.com/rockets/RocketEquations.pdf (but not a secure site).

Thus if we eject 10g/s at around Mach 3 or 1000m/s we'll get our 10N thrust, and after 10 seconds our 1kg mass has reduced by 10%. This will need 0.5mv² of energy per second, so 0.5x0.01x1000x1000 or 5000J/sec. In this case, break-even in terms of the rate of energy used versus the rate of kinetic energy gained happens at around 500m/s, after 50 seconds of firing, and we'll have used half the 1kg mass as fuel. The approximation of our mass remaining substantially the same doesn't really apply. However, if that thrust is there, and it's relative to the rocket, and the rocket is going fast enough, then there is still a point where the energy gained is greater than the energy expended.

An interesting point about the rocket equations is that as you use less mass per second, the energy needed to achieve the same thrust goes up. Thus at 1kg/sec ejected at 1m/s you use 0.5 watt and get 1N, but if you use 1g/second at 1000m/second you need 500W to achieve that newton. Might point to some idea using ball-bearings being ejected to produce the reaction thrust and then collecting them to recycle through the contraption. Might get break-even at a low-enough speed to be useful.
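
That trade-off is easy to tabulate. A short Python sketch (my own, assuming all the burn energy ends up in the exhaust stream with no losses) shows the power needed for 1N of thrust at different exhaust velocities, and the vehicle speed at which the kinetic-energy gain (F times v) matches that power - which works out as simply half the exhaust velocity:

# Power needed to produce 1 N of thrust at various exhaust velocities, and
# the vehicle speed where F*v equals that power (the instantaneous break-even).
F = 1.0                           # N of thrust wanted
for v_e in (1.0, 100.0, 1000.0):  # exhaust velocity, m/s
    mdot = F / v_e                # kg/s of reaction mass needed
    power = 0.5 * mdot * v_e**2   # W put into the exhaust stream
    breakeven = power / F         # vehicle speed where F*v = power, i.e. v_e/2
    print(f"v_e={v_e:6.0f} m/s  mdot={mdot*1000:7.1f} g/s  "
          f"power={power:6.1f} W  break-even at {breakeven:5.1f} m/s")
# Matches the figures above: 1 kg/s at 1 m/s costs 0.5 W; 1 g/s at 1000 m/s costs 500 W.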

I thought of the Hero steam engine design, but then we need to accelerate the ejected mass up to the angular velocity at the tip-jets and that energy would also need to be recovered. Looks a bit marginal as to whether a practical design could self-loop given the losses. By the time you get to OU I expect the rotor to be spinning a lot faster than the ejected mass velocity, so you'd need a squirrel-cage around it to recover that energy, and it would also rotate in the same direction as the rotor. 

Of course the IVO design bypasses all this mechanical stuff and just produces a thrust, so a lot easier to make something that works. Break-even point is when the devices are moving fast enough to produce the power to run them, and since those produce 45mN per watt let's assume a 1m radius, so we get 0.045N times the circumference (2pi x 1m) = 0.2827 joules of work per revolution. Thus you only need 3.5368 revolutions per second to produce a watt. That's 212.2 rpm.
If we now take that radius down to 10cm then we need 2120 rpm. Not exactly much of a problem. Allow around 10% losses in the bearings and air drag and we probably break even at around 2335rpm, and at 3500 rpm we're producing around 1.5W to run our 1W driver so we have 0.5W left over for something else. Faster it goes, the more power we can get out. Yep, this seems ridiculous, but logically follows.
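
Here's the same spin-up arithmetic as a short Python sketch (my own check of the figures above), using the quoted 45mN per watt and treating the losses as a flat 10% overhead; drop the loss factor to 1.0 to recover the 212 rpm figure:

# Work done per revolution by a thruster of given thrust mounted at radius r
# is thrust * 2*pi*r; break-even is where that work per second equals the
# drive power (plus an assumed 10% for bearing and air-drag losses).
import math

thrust_per_watt = 0.045   # N/W, quoted IVO figure
drive_power = 1.0         # W into the thruster
loss_factor = 1.10        # assumed ~10% mechanical losses

for r in (1.0, 0.1):      # arm radius in metres
    work_per_rev = thrust_per_watt * drive_power * 2 * math.pi * r  # joules/rev
    rps = loss_factor * drive_power / work_per_rev
    print(f"r = {r:.1f} m: {work_per_rev:.4f} J/rev, break-even ~ {rps*60:.0f} rpm")
# Roughly 230 rpm at 1 m radius and 2330 rpm at 10 cm, in line with the text.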

Maybe worth noting that the Becker and Bhatt experiment showed around 100mN/watt, which is around double the IVO thrust, so the power produced at any rotation speed would be around double this. On the other hand, they got this using mW, and the IVO design obviously fixed the problem of getting a big-enough and leaky-enough capacitor so it uses more power and produces a larger actual thrust. 

Looks like the limit of power out will be the point where the centripetal force gets too great and the disk breaks. Still, looks like you can count on maybe 5 times more energy out than in before stuff breaks.

This revelation somewhat surprised me. A few things I'd thought were impossible look to be pretty easy in the end. Just needs that flip of frames to make it work, and it's been under our noses all our lives.

Links:
IVO thruster https://ivolimited.us/press-release-ivo-ltd-introduces-the-worlds-first-pure-electric-thruster-for-satellites/
Becker and Bhatt blogpost: http://physicsfromtheedge.blogspot.com/2021/02/horizon-engineers.html 


The summary was not added before since I'm not looking for work. Enough people have tried looking at this profile to make some information useful. If you're looking for wine, I don't sell it (though might give you a bottle if you're here).

The degree in physics in '75 (nuclear+solid-state) was not good and I worked mainly in computer-related engineering jobs. Notable large companies were Raychem and Xerox, though there were also 3 start-ups. When the Xerox site closed in 2002 I took the early retirement option and moved from the UK to a vineyard in France, where I intended to sort out some of the physics paradoxes that had bothered me around 40 years ago. Some progress on that until LENR/Cold Fusion took my interest at the end of 2011. Since then I have been working on producing a cheap energy source in collaboration with friends in other countries.

Looking back, I've done a lot of jobs in my life. Here is a list:
Fork-truck driver, bandsaw operator for cutting steel tubing (SKF Steel), Timber-yard hand (British Rail), pottery engineer (ECP), computer operator, computer programmer, systems programmer (Datapoint, IBM 4343, IBM VTAM, IBM PC), communications engineer, computer installations engineer (Raychem UK), electronics design (QCL), tape drive repair engineer/customer service (Tallgrass UK), Failure Analysis, electronics design, firmware programmer (Xerox UK). In the meantime, I've also made a few guitars and other musical instruments and have gained a lot of diverse tools for making things. Also built a couple of houses with my father-in-law.

Current projects seem to involve doing things that should require proper lab equipment or a fab, so are a bit difficult to make on the bench here. It takes somewhat longer to build the kit than I'd like and getting the materials is a bit more difficult in an agricultural area. Still, it makes me realise the luxury I had at Xerox where I could simply get the design right and pass it to someone to build it. Oh well....

I haven't accepted any endorsements, on the grounds that the people endorsing haven't actually worked with me on those fields, thus don't actually know. The ease of applying endorsements and lack of verification makes me wary of those in general. I intend that my comments here are relevant and useful, and they are based on a pretty wide industrial experience.


September 30, 2019

One of the aims I had after being forced into taking an early retirement as the least-worst option available was to spend time thinking about the physics paradoxes I’d noted over the years, some of which had bugged me since I was a student. Of course, after a lifetime of experience I didn’t have any doubt that thermodynamics gives the right answers to whether things worked or not, but there remained that feeling that there were parts of the explanations of the reasons that couldn’t be correct. There were also some pretty basic questions as to why things are the way they are. After all, if you know what happens in a known situation that’s very useful, but if you know why it happens then you can predict what will happen in situations you haven’t actually tested, and you can design things that haven’t yet been built but ought to work.

It’s maybe time to have another go at explaining some errors in the basic definitions that have a very large effect on what we think is possible and impossible. We define energy as a scalar, which has a magnitude but no direction. When you have a certain number of joules of energy, and you add in some more, then the total is a simple numeric sum and there is no vector addition involved. (For vectors and scalars, look at the wiki if needed.) Work has the same units of joules, and likewise has no vector part. If you think about it, though, work has to have a vector part, since it is force times distance, and the distance has to be in one direction or another. Likewise, kinetic energy (as energy of movement) must have a vector part since you can’t have a movement without it being in some direction. There’s thus something wrong with the underlying definition here, since if you move something away from where it was, and then back to the original position, that can be regarded as being “no work done” and indeed any energy expended in doing it gets lost totally to heat from the friction or viscosity of the medium you’re working in. If you look at the halfway point, when the object has moved, though, you can assign a (scalar) amount of work as having been done depending on the force needed to move the object. Do the same amount of work to move it back to the original position and you’ve now done zero work – it thus seems obvious that work can be positive or negative at least, and of course if you add in a few more staging points then it’s obvious that work must be a vector in some ways and that when the total (vector) movement is zero, then you’ve done zero work. This argument is after all used in proving that Brownian motion results in zero work being done, even though here any particular particle may actually be in a different place but on average the system looks the same with the same centre of mass. That of course gets you away from the paradox that in a container of normal air there’s work being done all the time in shifting those dust particles, and yet the temperature doesn’t change. Actually, there’s also work being done all the time in the collisions of the gas molecules with each other, when you look at the repulsive force between atoms/molecules and the distances over which that force acts. Work is also being done in the elliptical orbit of a planet around a star, or the swing of a pendulum, but these are in the textbooks as classical situations in which no net work is being done over time, though at any point you can calculate at what rate work is being done.

OK, so the definitions are obviously a bit more flexible in actual use in the textbooks, and you need to dance around a bit in order to choose the right explanation that will match reality. Zero net work, when work is obviously always being done, does after all imply that work can be positive or negative, and that the direction is important. Since we’re pretty used to having to do that, normally we skip over this problem and if someone thinks there’s something wrong with it we tell them that this part of science has been settled for 150 years or more so there’s nothing to see, move on. I chose to discount that bit of hand-waving and dig deeper, since after all there’s a lot of time to think about such things when pruning vines. It took a long time to grope my way to a solution, since I also had some early learning of definitions to overcome. It’s harder to see when it’s in the language.

Similarly, heat is regarded as definitely a scalar quantity. There’s no direction associated with that, since you can add amounts of heat together or take them away and there’s no whiff of any vectors there. It’s just kinetic energy, which is defined as a scalar too. For potential energy, that’s a scalar quantity too, and we count it in joules. Using it as a scalar, all the calculations work, too. There’s no obvious evidence of a problem in the definitions such as sums coming out wrong.

However, it remains that everything is made of atoms and molecules, and thus that in order to have kinetic energy at all those particles have to be moving, and they will be moving in a particular direction since, without that movement, there’s no kinetic energy. Each particle thus has a scalar amount of kinetic energy and a vector amount of momentum, but the vector momenta will vector-add to zero overall. If they don’t sum to zero, after all, you either have a wind (in a gas), a current (in a liquid), or a moving bit of solid stuff, and such energy is easily-harvested and used to do work. For the moment, I’ll choose to discuss only fluids, and we can discuss solids later if needed since it’s a bit less obvious there. Still, in order to do work you need a single-direction force, and if your force is in random directions at a time-scale that is too short to resolve, then you can’t do effective work. Once you have a unidirectional force, or unidirectional kinetic energy to produce that force, then you can do work over a specific distance in a specific direction.

The important thing here, though, is that heat in that fluid must have a scalar amount of energy and a vector amount of momentum, where the momenta vector-sum to zero for the fluid as a whole but are definitely there when we look at individual particles. Heat isn’t a pure scalar, therefore, and this is really important and where there’s an error in the normal definition. Something that is there with a zero average size is actually there, and doesn’t cease to exist because its average is zero.
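
A trivial numerical illustration of that point (my own, not from the original article): give a collection of particles random velocities and the vector sum of the momenta tends towards zero as the sample grows, while the scalar sum of kinetic energy stays firmly positive.

# Random-direction motion: net momentum averages to ~0, kinetic energy doesn't.
import random

random.seed(1)
m = 1.0                      # particle mass, arbitrary units
n = 100000
px = py = pz = 0.0           # summed momentum components
ke = 0.0                     # summed kinetic energy
for _ in range(n):
    vx, vy, vz = (random.gauss(0.0, 1.0) for _ in range(3))
    px += m * vx; py += m * vy; pz += m * vz
    ke += 0.5 * m * (vx*vx + vy*vy + vz*vz)

print(f"mean momentum per particle: ({px/n:+.4f}, {py/n:+.4f}, {pz/n:+.4f})")
print(f"mean kinetic energy per particle: {ke/n:.4f}")
# Momentum components average close to zero; kinetic energy averages ~1.5.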

It’s well-known that the only way you can get thermal energy out of a volume of fluid is to have another fluid that is cooler and to allow energy to pass from the hotter to the colder fluid through a heat engine. It is denied that it is possible to simply convert that heat energy to useful work – and remember that work is force times distance, and that in the textbooks work is also defined as a pure scalar quantity even though that distance must be in a particular direction. It turns out that this standard viewpoint is not actually justified (or indeed justifiable in theory) since it ignores the momentum vectors that actually must exist for the kinetic energy to be there at all.

Funny thing is that potential energy is also regarded as a pure scalar. In a lot of cases, there will actually be a direction involved. For example, an electron will have potential energy in an electric field, and if released it will move upfield to the +ve side. If you release a weight, it will drop down to the ground, and if you throw it upwards in any direction it will end up on the ground at some point. A spring will push or pull in one direction. Chemical energy is often directionless when looked at on a human scale, but may well have direction when looked at on the atomic scale. Certainly, when potential energy is converted to kinetic energy, it must be in one direction for each particle.

The definition error is in treating things as scalars when there’s a direction involved, and this means that we don’t see all the possibilities because they don’t show up in the maths we work out from those faulty definitions. If the foundations are faulty, then what you build on top will have an error. In this case, it’s resulted in an absolute denial that perpetual motion is possible, when it actually is possible in theory and has been shown to be possible by experiment (see Lovell device for one that’s easily replicated, though low-power). For the very large number of experiments looking for Free Energy, it has also resulted in people trying mainly ways that have no hope of working (gravity engines, magnet motors, strangely-wound transformers, motor/generator combinations, and so on) rather than trying things that will work. Just because you can’t measure that momentum vector with a wind-gauge doesn’t mean it isn’t actually there.

It’s really a problem with regarding an average as being the whole information, when averages always lose information about the subject. Here, the information lost is critical.

Changing the direction of kinetic energy (or indeed potential energy) does not necessarily require net energy to be put in. You can always put a figure on the work that is done, of course (and this is a point where I mistakenly said “no work is done” before and Abd spent a while arguing against that, and I should have said “no energy is exchanged” instead), but since work has to have a direction involved and is thus not actually a scalar, we really need instead to look at the energy exchanges and momentum exchanges in separate calculations. If you focus on the force times distance (work) for each particle it’s too easy to lose track. The big thing is that in a conservative field, a free particle will have the sum of its potential energy (due to the field) and the kinetic energy (due to its motion) as a (scalar) constant. The direction of the particle will however change, and this is a momentum exchange (between the source of the field and the particle) that is mediated by the field. Provided the field itself remains constant, then the scalar sum of the potential and kinetic energy will remain constant. However, the momentum vector will change to be more aligned with the field the longer that the particle is under the influence of that field. The ultimate example of a field changing momentum but not energy is a planet in a circular orbit. There, the kinetic energy stays constant, the potential energy stays constant, but the momentum is constantly changing direction.

Net result is that if you allow a particle to move under the influence of a field, then no matter what direction the particle enters the field, it will end up moving in one general direction down the field (note exception for radial symmetry field and a particle moving in an orbit, but it’s not that useful in harvesting energy), and this is how we recognise that a field is there and how we measure it. It’s very basic. Maybe however one of those things where we’re so used to the thing itself happening, and the maths working out, that we can’t see the errors in the definitions of the quantities involved. This changes the momentum of the particle, the kinetic energy, and the potential energy, but the scalar sum of the potential and kinetic energy remains constant.
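
As a numerical illustration of a field changing momentum while leaving the scalar energy sum alone (my own sketch, assuming a simple inverse-square central field), integrate a particle for half an orbit: the momentum vector roughly reverses while kinetic plus potential energy stays essentially constant.

# Particle in a conservative inverse-square field: KE + PE stays constant
# (to within the integration error) while the momentum direction swings round.
import math

GM = 1.0                  # field strength, arbitrary units
x, y = 1.0, 0.0           # starting position
vx, vy = 0.0, 1.0         # starting velocity (circular orbit for these values)
dt = 1e-4

def total_energy(x, y, vx, vy):
    return 0.5 * (vx*vx + vy*vy) - GM / math.hypot(x, y)   # KE + PE per unit mass

e0 = total_energy(x, y, vx, vy)
for _ in range(int(math.pi / dt)):        # roughly half an orbit
    r3 = math.hypot(x, y) ** 3
    vx += -GM * x / r3 * dt               # the field changes the momentum...
    vy += -GM * y / r3 * dt
    x += vx * dt
    y += vy * dt

print(f"energy drift over half an orbit: {total_energy(x, y, vx, vy) - e0:.2e}")
print(f"velocity now ({vx:+.3f}, {vy:+.3f}) versus (0.000, +1.000) at the start")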

Going back to our heat, it’s actually particles moving in random directions with scalar energy and vector momenta. If we use the right field on those particles, their directions will be changed, and we no longer have random directions but instead we have a wind or current, and producing that wind or current requires no energy to be put in but instead only requires a momentum exchange with the source of the field. It should be pretty obvious that when we have a wind or current of any type then that can do work – it has a force and a net direction, and so can move things in that direction. The problem of de-randomising all those random-direction particles is insoluble unless you realise that fields affect the momentum and not the total energy, and that to de-randomise the particles you don’t need to put any energy in. If you look at the work involved, of course it looks as if you’ll need to put energy in to do the work (since work and energy are both measured in joules and are often exactly the same thing) and thus it would be a zero-gain process, but since it’s only a momentum exchange then that problem does not exist.

The important point here is that we can change the direction of the momentum of a particle without needing to put energy in, even though we can show that there’s a force needed and a distance over which it has operated. A momentum exchange is not the same as an energy exchange.

Note that this way of producing a net directionality from a random-direction collection of particles does not require a second colder heat sink. It simply converts the heat energy directly into usable energy that is now directed rather than random. If you regard kinetic energy (and thus heat) as a scalar then this would be impossible, but both do indeed have that vector component of momentum, because they are not a continuous thing but depend on particles actually moving in a particular direction. Those momentum vectors can be changed by a momentum exchange alone, and the use of the correct field in the right configuration will allow a single container of material in thermal equilibrium to do work, and in the process lose temperature.

Another way of looking at the action of the field here is that it changes the distribution of the probabilities of any particle moving in a particular direction, and of course any other process that can change those probabilities of the momentum vectors being more in one direction than another can do the same job of converting heat into useful work or useable energy. The energy doesn’t go away when we’ve “used” it to do work, after all, it’s just that the momentum vectors get randomised. Thus using a field to de-randomise the momentum vectors isn’t the only possible way of converting heat to useful energy, and any way of producing that “diode” will also work.

That was the paradox I was really working on at the start, that while there is Conservation of Energy that applies to everything, when it comes to working with thermodynamics there will always be losses of useful energy. When you do work that takes 10 joules, you use up 10 joules of energy and yet you still must have 10 joules left at the end, but as heat rather than in your battery as chemical energy. That leads on to the question why things get randomised, and so we get down to Heisenberg’s Uncertainty Principle (HUP) as the underlying reason, but you can’t currently dig deeper than that as to the reason for the HUP. We don’t know. However, if that was the only thing happening, this universe would be a soup of particles in constant motion and maximum uncertainty, whereas what we see are planets orbiting around stars, certain places to go to mine certain minerals, and a whole lot of actual order in things. The order is introduced by the fields, with electrical forces holding together atoms and molecules and crystals, and gravity holding the planets together and in orbit around the stars. Maybe magnetic fields have some extra ordering effects too, especially on plasma streams in space. If the fields didn’t produce order, we wouldn’t be here. Maybe that’s a somewhat-large omission in the standard viewpoint of How Things Work. Fields produce order. There isn’t simply a one-way trip to disorder built-in to the universe, but also the fields countering that disorder. No yin without yang.

What we end up with is that whereas the HUP produces a tendency to randomness, there’s also a tendency of conservative fields to produce order, and though we use that tendency to produce order in a lot of processes we deny that that tendency to order could be larger than the tendency to disorder. If you get the field strong enough, and get the design right, then it can be, and this is why solar panels actually work – they produce more order than disorder, and output useable energy in the process rather than requiring it to be put in.

Since we start with random momenta, then if we’re using a field we’d need to produce the particle within the field we’re wanting to use. One way that is easy to explain is producing photoelectrons within the depletion zone of a PN junction, since the electric field there is strong and the majority of such electrons will be swept to the electrode in the +ve direction of the field. That electrode thus becomes charged relative to the electrode on the other side, and current will flow in the external circuit. Basically, almost a standard photovoltaic cell except that we need a semiconductor with a small band gap. Not something that most people could produce in the back shed, but definitely a technical problem rather than being impossible. Other ways are possible using tunnelling probabilities.

Once you’ve actually seen this you’ll likely wonder why it was so hard to explain. You’ll probably also wonder why other people haven’t seen it too. It does however need some digging down in the basic definitions that we were all taught and thus accepted as being the truth, because there were no obvious devices around that contradicted it. That may change in the next few years anyway, and it’s likely you’ll be using devices that turn ambient heat into electrical energy you can use. I think it’s reached that point where someone will produce a cheap one with a useful amount of power (rather than just measurable and costing too much anyway). Of course, there will probably still be people who claim stuff they can’t prove, but it’s pretty easy to tell the difference. Just get the data...



June 22, 2018

"Simon Derricutt is why this site [Revolution-Green] exists and people bother to read it. His insights into energy theory coupled with practical research and experiments makes him a well qualified to make public comment on the subject. So i have taken the liberty to publish one of his Disqus comments with a little edit." -- Mark Dansie


FREE ENERGY DEVICES AND PAST CLAIMS

There are many free energy inventions published on the net.  Many sites publish various plans on how to make them as well as assurances that they actually work. The strange thing about these claims is that nobody actually has a working model where the input energy and output energy can be measured and thus prove the device to be OU. Over at Stefan’s site (overunity.com ) there are people who have been trying to make these things for years, and enthusiastic (and competent) people have done replications and measured the results. So far, no-one has managed to make anything that self-runs, or puts out more energy than was put in when the measurements are competently done.

Based on the experimental results, it seems certain that the principles that are being tried here, and the theories that support those principles, are simply wrong. Nature doesn’t work that way. The originators of the ideas didn’t understand the way Nature works, and either got their measurements wrong or lied about them when testing things out. People like Lindemann, Beardon, etc. must know that their machines don’t actually work as claimed, since otherwise they’d be running their own houses and workshops from them and would have stopped paying electricity bills.

Of course, there’s no guarantee that standard science has all the answers, and so things we think are impossible today may turn out to actually be possible tomorrow (or in a few decades, anyway). However, one axiom that is important here is that if you do the same things you will get the same results. We haven’t found an exception to that over the history of human endeavour. In order to get a different result, you need to do something significantly different.

LOGICAL ERRORS IN DERIVATIONS

Over the last few years I’ve been digging in the foundations of physics and have found some logical errors in derivations that suggest that the ancient dream of Free Energy may in fact be possible, even if it’s not quite as straightforward as winding coils, spinning flywheels, or making linked pendula. This requires (of course) doing things somewhat differently than has been tried before, and I’m working on those experimental tests. It’s taken a fair amount of study of the standard known science and experimental results, and I’m not yet in a position to prove success. Maybe this year…. The main point about what I’m doing, though, is that it has a solid base in standard theory with the observation that it misses noticing some fine points. For the 2LoT work, the standard observation is that things tend to disorder, and this is true. However, if there wasn’t also a tendency to order available, then we wouldn’t be here to observe anything. That tendency to order is caused by the fields such as gravity, electromagnetic, and nuclear, and if we make the field strong enough then it becomes more powerful than the tendency to disorder. We can exploit that, and we do that most obviously in solar panels. For the CoM work, we automatically think of objects touching to transfer momentum (or in fact an impulse) but again that force times time can only be transferred by a field and the atoms do not in fact touch. The fact that the force can only be transferred by a field, and that a change in that field can only propagate at the speed of light, means that if we can get the phase and distances right then momentum itself may not be conserved. If CoM is violatable, then it follows that CoE is also not absolute.

A LOGICAL ROUTE TO ACHIEVING PERPETUAL MOTION

There is thus a logical route to achieving perpetual motion and indeed OU, but the historical methods of trying to achieve it have failed every time they’ve been tested, so the various Free Energy machine designs available on the net tell you what not to do rather than what to do. At a basic level, you should read them to check if your own Free Energy idea has been tried before. If it’s been tried before, then almost by definition it failed, since such a machine would be both valuable and would have been replicated by anyone who had the capability. Within a few years of publication, they’d be all over the place and you’d be able to buy them in the local hardware shop in the same way as today you can buy solar panels.

Of course, the ideas I’ve put forward are not easy to implement, and the 2LoT stuff does need some tighter definitions of the words we use such as work and energy, as well as noticing that fields produce order. Since the logical conclusions go against what we’ve learnt as truth and can sense using fingers, it seems few people have understood the principles and instead reject them as incredible (and wrong). The CoM work digs deeper into something that is intuitively true, that momentum is always conserved – we’ve known that is true for around 3 centuries, so finding there’s a loophole is just a tad surprising. I even used the absolute truth of that in deriving the 2LoT work, and as it happens in that situation it is exactly true since the fields we’re using are non-varying.

IT’S CHEAPER AND FASTER TO LEARN FROM SOMEONE ELSE’S MISTAKES

The field of Free Energy is over-full of charlatans and of people who get measurements wrong, as well as people who have ideas about the way reality works that are experimentally proven to be not true (after all, if the ideas were right then their experiments would have been successful). Some people (such as Vinyasi, here) believe that they are being told the truth, and thus spend time trying to learn the theories and develop them. Instead, it’s better to look at what actually works where a replication has been done and is successful. For most of the Free Energy ideas, they’ve been built by many people and simply never worked in the sense of delivering Free Energy. You can learn from your own mistakes, but it’s cheaper and faster to learn from someone else’s mistakes. If you can find a Free Energy machine where experimenters say it works for them, and replications work, then build it. Problem is, as far as I can see there are no such projects, just people who hope that their idea will actually work this time. It remains that to get a different result, you need to do something significantly different to the non-working idea.
-----



July 5, 2017

MAGNETIC FIELDS ARE INHERENTLY CONSERVATIVE...

...so basically you get back exactly the energy you put into them and thus magnet motors (as usually understood) will not produce continuous energy out but only return most of the energy you put in to start them running. What they can do, though, is provide a directionality to where things will go – they can change momentum in a loss-free manner. Like electrostatic fields, gravity and nuclear forces (all conservative) they can be used to sort random directions into a single direction, and thus make random-direction kinetic energy into a unidirectional stream that we can then use.

I’ll be explaining this at more length in the next article, and maybe this time I’ll manage to explain things well-enough that a few more people will understand the dance of energy-transfers and momentum-transfers that can be de-randomised by correct use of the force-fields we already know about. Although the principle that things become more random is easily-demonstrated and applies to most situations, in the presence of a force-field it is modified and becomes anisotropic. In most cases this anisotropy is irrelevant and we don’t notice it, but with certain selected cases it can become both very obvious and useful – for example with a solar panel where an inbuilt electrical field makes the electrons go one way and the holes the other and thus gives us unidirectional electrical power from random-direction input photons. I’ll explain how a magnetic field can be similarly used to provide a preferred direction to electrons. Gravity obviously de-randomises interactions as well, since otherwise we wouldn’t see the stars in the sky (and wouldn’t be here to discuss it, either). I can’t see a way of using nuclear forces this way, though, since it’s not a long-range force relative to the scale of the atoms.

BEATING  THE SYSTEM

As far as I can tell, the engineering required to “beat the system” is beyond most back-shed experimenters’ facilities, since the scale required is of the order of microns or nanometers down to atomic for some things, so requires some special kit which is somewhat expensive and hard to make. Putting a few magnets in a special arrangement and expecting it to just turn on its own just isn’t going to do anything – you need to think about what random energy can be redirected by the system and why it will happen.

As far as we know, we cannot make or destroy energy. That also means that there is energy all around us, but we can’t use it because it is going in random directions. Our methods of “producing energy” thus involve either turning mass into energy and utilising the directionality of that random energy when it moves to an area of lower energy-density (the hot to cold direction) to get some work done, or finding some directional energy (wind, waves etc.) and tapping it to get some directional energy to do work. There have been a few systems that change heat energy directly into electrical energy, but they have been at a low power-level and thus not useful in practical terms. Fairly soon we should have some devices that do this at a useful level of power.
-----



February 28, 2017

Back to Work

This is a continuation of the series of Free Work essays. Thanks to Abd for reading and responding to the last one with the points he found unacceptable, and I hope I’ve fixed those points this time. It’s also a lot less words, since I’ve missed off the history of how I got to this point.

A recap:

Mass is a fixed form of energy, and the two are connected by Einstein’s equation E = mc². I’ll generally use the term mass/energy when I need to refer to the “stuff” the world is made of.

KE is kinetic energy, which is energy of movement. Heat is random-directionality KE.
PE is potential energy, which is stored in a way that can be released.
KEWork is work that is done where the energy goes into kinetic energy of something.
PEWork is work that is done where the energy is stored in some way as potential energy in the system.
Displacement Work results in a different configuration of the system, but does not store any energy or use energy to do it.

The sum of mass, KE and PE is conserved. There is no Free Energy, and we have to use what we have. We normally convert the PE of mass into the KE of heat to power our world, but we’re also using renewables where we divert a flow of environmental energy (solar, wind, waves etc.) for our use.

Looking further, we can see that KEWork is actually just the kinetic energy in the object of interest (say, a car moving) and that similarly PEWork is just the PE embodied in the object of the work, for example a brick has moved from ground level to the top of the wall. The terms “energy” and “work” only serve to determine whether this is energy coming in to the process or energy going out of it. In Displacement work things are a different shape or format – if you’re shaping a lump of steel by hammering it there’s no change in mass or potential energy when it’s the right shape, and all the work hammering it has gone into KE as heat in the environment. Though we have to put energy in to change that shape, it is not stored in the thing itself. When we move something from one location to another, there may be a change of gravitational energy involved, but the energy that is not stored there all goes into friction (heat returned to the environment) or other losses (which also mostly end up as heat in the environment). Moving something from one place to another at the same gravitational potential takes no net energy at all – it’s Displacement work and all energy put into the work goes into the environment as heat. As usual, this is a generalisation, and moving a charged object in an electric field will also store or release energy, but here I’m just concentrating on the simple mechanics rather than covering all the possible variations.

When we do work, therefore, we change the locations of mass/energy in our world, and since mass/energy is conserved we have exactly the same amount of mass/energy at the end as we started with. It looks a bit pointless when it’s been logically reduced that far. However, that mass in a different location may be you arriving at work, or coming home again, so it is actually pretty important. If you start off from home in the morning and return there in the evening, then overall that is no net work done, but our lives wouldn’t work too well without it.

From an energy viewpoint, all the work we do is simply changing the configuration of mass/energy around, and there is no gain or loss. Work is simply the name we give to the output configuration of mass/energy.

In our normal experience, heat energy always spreads out from its source until all things are at the same temperature. Hotter things cool down and colder things warm up, and heat energy moves from hotter to colder. In order to produce a local hot spot we need to put energy into it, and though we call that doing work it is simply moving energy to where we want it. Heat is transmitted in two main ways, conduction and radiation. Conduction is by physical collision of molecules or atoms, or through the bonds between them in a solid. Radiation is however a quantum process and is covered by the Stefan-Boltzmann law. A body will radiate in proportion to the 4th power of its absolute temperature, so at any temperature above absolute zero it will radiate photons and this does not depend on the environment it is in – it will happen no matter what.

Whereas we are taught that for a system in thermal equilibrium there is no heat transfer, the Stefan-Boltzmann law tells us that all of the bodies are radiating heat as EM waves. For any one of those bodies, to remain at the same temperature means that it is receiving exactly as much energy as it is radiating, and we know exactly what amount of energy it is radiating. We thus know the radiation density available in any bandwidth.
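
To put a number on that radiation density, here's a quick Python sketch (mine, just the Stefan-Boltzmann law with nothing added): a surface near room temperature is radiating a few hundred watts per square metre, whether or not it happens to be in equilibrium with its surroundings.

# Power radiated per square metre = emissivity * sigma * T^4 (Stefan-Boltzmann)
sigma = 5.670e-8   # W m^-2 K^-4
for T in (273.15, 293.15, 300.0):        # 0 C, 20 C, ~27 C
    for emissivity in (1.0, 0.95):
        flux = emissivity * sigma * T**4
        print(f"T = {T:6.2f} K, emissivity {emissivity:.2f}: {flux:6.1f} W/m^2")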

If you can accept that this statement is true, then the methods of converting that flow of energy into usable electricity will make sense. If you deny it, though, and say that no energy is being radiated in thermal equilibrium (some people do say that…), then you’ll encounter various logical problems in considering a system that is physically large (and you’ll also have problems believing that a thermal camera can work).

Any electromagnetic wave (EM wave) can be received and turned into an AC electrical wave by use of the correct size antenna. This AC can then be rectified into DC by use of a diode. The combination of an antenna and diode is known as a rectenna, and for the THz radiation of room-temperature IR it is very small and is thus known as a nantenna. Any IR in the right bandwidth that hits that nantenna will be converted at some efficiency (less than 100% but more likely in the 5-10% range) into an electrical output.
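
To give a feel for the sizes and outputs involved, here's a minimal sketch (the 10 micron centre wavelength, the 10 W/m² of in-band flux and the efficiencies are assumptions for illustration, not measured values):

    # Rough numbers for a nantenna: element size and electrical output per square metre.
    wavelength = 10e-6                  # m, assumed centre of the room-temperature IR band
    print(wavelength / 2)               # ~5e-6 m: a half-wave antenna element is a few microns long
    in_band_flux = 10.0                 # W/m^2 assumed to fall within the antenna's bandwidth
    for efficiency in (0.05, 0.10):     # the 5-10% conversion range mentioned above
        print(efficiency, in_band_flux * efficiency)   # 0.5 to 1 W/m^2 of electrical output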

We’ve seen that at room temperature there is a continuous flow of IR energy, and with most objects having an emissivity of around 95% in this range it’s not far short of black-body intensity. If we put the nantenna into this environment it will generate power from the incident IR even if it starts off at the same temperature as everything else. We can connect the nantenna output to a load such as a resistor and the resistor will heat up. By any usual definition, this is doing work, and any other use of the electricity would likewise be work. By Conservation of Energy (CoE), some of the incident IR is being turned into electricity rather than keeping the nantenna warm, so the nantenna must now be radiating less IR than it receives, and if we measure its temperature it will be lower than the environment. There is thus an obvious reason for heat energy to go into the nantenna array – it’s colder.

The nantenna is simply redirecting the incident random-direction photon energy as unidirectional electrical energy in the connecting wires. There are no losses of energy and no gains here (CoE again), but just energy going in a direction we choose rather than being in random directions.

We already know that most of the work we normally do will end up as heat in the environment, apart from what we store as PE in lifted weights, springs or batteries etc., so if we do work using the power from the nantenna then for most types of work the energy will return to the environment immediately which will thus remain at the same temperature. If we do PEWork, then that amount of energy will be retained for a while and returned later (maybe much later when the house falls down) and the environment will cool a bit. The amount of energy in the environment is however vast, being approximately the heat capacity (mass times specific heat) times the absolute temperature, and the Earth receives from the Sun several thousand times as much power as we currently use in total. This resource is practically inexhaustible.
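
To give an idea of how big that ambient store is, here's a minimal sketch using standard sea-level air properties (the room dimensions are an arbitrary assumption, and treating the specific heat as constant down to absolute zero is the same rough approximation as in the text):

    # Thermal energy held in the air of an ordinary room, measured from absolute zero.
    rho_air = 1.2      # kg/m^3, approximate density of air at sea level
    cp_air = 1005.0    # J/(kg K), specific heat of air at constant pressure
    temp_k = 293.0     # about 20 degrees C
    room_m3 = 4 * 5 * 2.5                            # an assumed 4 m x 5 m x 2.5 m room

    energy_per_m3 = rho_air * cp_air * temp_k        # ~3.5e5 J per cubic metre
    print(energy_per_m3, energy_per_m3 * room_m3)    # ~1.8e7 J in the room's air alone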

What we have done here is to set up an energy loop:

Environmental heat => IR radiation => nantenna array => electricity => work in load => heat energy => environmental heat

Alternatively: random direction EM KE => unidirectional electrical KE => random direction thermal KE => random direction EM KE
where thermal KE is the temperature of the bodies as measured with a contact thermometer and EM KE is the electromagnetic radiation that that warm body will radiate according to the Stefan-Boltzmann law.

This loop will continue to work as long as the devices last. It doesn’t need fuel to keep running, since it is using the radiated energy in the environment and redirecting it.

Et Voila! Perpetual Motion!

If you wish to read a longer version of this that goes into more history, go to http://revolution-green.com/theory-perpetual-motion/

The irony is that Free Energy machines in the past have tried to produce energy from nothing, which is why they don’t work (energy is conserved). Using environmental energy, on the other hand, works because energy is conserved so you can’t lose it. All we need to do is to convert random energy to directional energy and we can use it again and again, and we already have devices that do that.
-----


February 19, 2017

(updated 26 Feb and 9th Mar 2017) Since Mark decided to repeat a previous article in this series, it’s maybe time to put the whole series into one article here. If you want a shorter version, it’s maybe better to look at http://revolution-green.com/back-to-work-again/ first and then come back here to get the historical stuff. There are a lot of words here to get through, after all. I have been somewhat surprised at the lack of response overall, though. After all the fraudulent claims of overunity and Perpetual Motion we’ve covered here, I put the logic of how to make a real Perpetual Motion system up and some examples of how to design devices that actually do this around 9 months ago and it seems no-one experimenting in Free Energy noticed that it does all they wish their (non-working) machines would do. I still get people asking about buoyancy systems and magnetic motors, and if you understand the physics here and how Energy moves around and how Work is done then it’s very easy to show that nearly all the “suppressed technologies” can’t work. There are a couple of “lost technologies” that may have used these principles, in particular the Manelas device which was said to deliver 60W or so and was seen to be around 5°C below ambient temperature.

Put in a nutshell, nearly all the systems we use to produce Work do that by a movement of energy from one place to another, since that’s how we know we can get Work done. We measure the Work in newton-metres, a force times a distance, so we are predisposed by the language to think that way. This linear movement can only ever deliver a set amount of work for a set amount of energy that moves, and that observation is encapsulated in the 2nd Law of Thermodynamics.

It is however possible to make the energy travel in a loop, and then we can deliver an infinite amount of work for the same set amount of energy, providing the work done does not store energy.

If you understand that sentence, then there’s no need to read any further. The rest of the explanations here merely add some meat to the bones and say the same thing in different ways in the hope of finding the explanation that makes sense to you, the reader.

There is a lot of energy available in ambient temperatures so, even with work that stores energy, we can produce such work without needing fuel of any type. This energy-loop is conceptually quite simple to set up, as well, but does need some fabrication techniques that are *difficult* to do in a home workshop and where the kit is expensive to buy.

Over the past few years my time in the workshop was diminished because there was a need to care for my mum, whose Alzheimer’s disease got gradually worse. The last year was pretty bad. For this reason, I couldn’t put this theory into practice. In the meantime, I put the theory up in case any of the Free Energy experimenters wanted to make something that actually worked for a change. This year, however, I will have the time and a lot of help in getting the fabrication done, and we should thus see Free Work from a practical device. There are of course some others working along the same lines, and so we may see several alternate systems this year. I’m not telling you my design here, but instead the principles required, since there are diverse ways of achieving a working design and your idea may be better than mine.

I was intending to use this dissertation to apply for a DPhil, but it seems that there are residence requirements and that I’d have to take the course (and there isn’t a course because this is heretical philosophy) so I can’t annoy academia by a direct attack on the sacred cow of 2LoT. As such, I’ve just sent the document to a few people who can use it and I’ll publish it here. Despite being somewhat heretical, there have been no visits from MiB or others trying to suppress the ideas, and most of this has been on R-G for quite a while without any government reaction.

Thanks to Asterix, who pointed out an inadequate explanation of one point. This is explained by applying Conservation of Energy considerations. Mike Frost also engaged in the conversation, though maybe didn’t get the definitions I stated. Ken Rauen contributed to the thinking here and even though his engine ideas will not perform as he calculates, his intuition was correct (but joining with Mark Goldes was a baaad idea). To get something that works, you need to get both the maths and the physics correct, and his design doesn’t loop energy. Phil Hardcastle’s Sebithenco idea, using a standard thermionic valve (tube, for the US speakers), produces a small but verifiable amount of power from a single heat-sink in an experiment that is easy to replicate. To develop a new idea, it’s necessary to stand up and say what you think, and then see what the sceptical response is. Most of the time, the sceptics will be right and point out something that’s been missed, but sometimes the objections are not valid. Discussions are essential, so thanks to those that actually discussed the idea rather than dismissed it.

Thanks also to Abd, who demonstrated that I haven’t explained the basics and definitions well-enough for someone who hasn’t been immersed in 2LoT as long as I have. Since I didn’t define what work is normally understood to be, my redefinitions of the subdivisions of work didn’t make sense as to why I was doing this. I also didn’t define how we normally use a heat engine to get work out of a difference in temperature, so the point about using only a single heat-sink, and why this is important, is too easy to miss. I have a tendency to skip over intermediate steps in logic that seem obvious to me, and I’ve thus added in some extra paragraphs to cover the steps I realise I’ve missed. There may still be others I haven’t noticed, so if I’m made aware of non-sequiturs I’ll try to fix them.

OK, that’s enough foreword, and let’s get to the explanations themselves.

A dissertation on Work, Energy, and Perpetual Motion

“But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.”

— Sir Arthur Stanley Eddington, The Nature of the Physical World (1927)

This work is a summation of essays published on the net over the last few years, on a website (Revolution-Green.com) that started by debunking claims for “Free Energy” machines. Seeing all the frauds made me want an easy way to show absolutely whether such a machine would work or not, and in the process I examined the confusions between energy, work and power in common parlance, and often in more scientific discussions too. I also saw the blind-spot in the scientific consensus, since if there is a wave there is energy flowing, and a flow of energy is necessary to do work – energy in the common meaning is in fact a relative term. Why then could we not use the energy-flows in the environment to do work without needing to burn fuel? Though this appears to be forbidden by the 2nd Law of Thermodynamics in a lot of circumstances, we have a flow of energy and there seems to be no bar to doing so except for that Law. I needed to explain the concepts in a way that ordinary people should be able to understand, and thus stop wasting their time trying to recreate some “suppressed” Perpetual Motion machine that never worked. In the course of that, I found that a Perpetual Motion machine was not actually forbidden by nature itself, but was simply postulated as impossible because no-one had managed to build one that worked. This is logically indefensible. Understanding the reasons that energy moves around and what work is enables the design of a real device, and I shall endeavour here to explain the logic and show a practical and proven design. Since this is basically a work of philosophy there aren’t many equations, and because of the original audience there are a lot more words than absolutely needed, with repetitions now and again. There’s not really much need for the glossary or appendix, either, since I would expect all people reading this text to know all the basics or know how to search for them.

We’ll start with the paradox. There is the principle of Conservation of Mass/Energy, and since mass and energy are equivalent via E=mc² then in a closed universe we can’t change the amount of “stuff” we’ve got (mass/energy) but whatever we do we will end up with no more and no less. If we then look at a normal physics or engineering explanation of a process, we are told that we start with X joules of energy and we can do a little less than X joules of work, and then we have no energy left. This obviously doesn’t agree with the aforementioned CoE (Conservation of Energy). We really have a semantics problem, since the words we are using are not adequate. This will shortly be addressed, but before that I need to point out a misconception in our models for thermodynamics that, if it were true, would itself make such perpetual motion impossible.

2nd Law of Thermodynamics (2LoT)

Clausius statement: Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.

Kelvin statement: It is impossible, by means of inanimate material agency, to derive mechanical effect from any portion of matter by cooling it below the temperature of the coldest of the surrounding objects.

Planck’s proposition: It is impossible to construct an engine which will work in a complete cycle, and produce no effect except the raising of a weight and cooling of a heat reservoir.

Back in 1824, Sadi Carnot worked out the start of the theory of thermodynamics that we still use (because it works for classical devices). He regarded heat as a fluid that will flow from hotter to colder and not the other way. That unidirectionality of the flow of heat is critical to how we think of thermodynamics to this day, but it is not an accurate model.

Carnot envisioned a heat engine that took heat energy from a hotter heat-sink and, by using alternate adiabatic (where no heat is transferred) and isothermal (where heat is transferred) changes of volume and pressure of a gas, delivered work set by the ratio of the temperature difference between the hot heat-sink and a cooler heat-sink to the absolute temperature of the hot one. In this way, an idealised Carnot engine would translate the largest possible fraction of the heat flowing between the two heat-sinks into available mechanical work. Although this ideal engine cannot be physically realised, since real devices have losses, all subsequent devices have been considered as having a hot sink and a cold sink and can be compared to this “Carnot efficiency” to determine their departure from the ideal. Devices that use a single heat-sink only are not considered as possible, since there’s nowhere for the heat to go to. Though such devices exist and work, analysis of them is performed as if they have two heat-sinks available, and so the loophole in 2LoT is never seen.
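
For reference, since everything that follows gets compared against it, here's a minimal Python sketch of that Carnot efficiency (the temperatures are just examples):

    # Carnot efficiency: the maximum fraction of the heat flow a two-heat-sink engine can turn into work.
    def carnot_efficiency(t_hot_k, t_cold_k):
        return 1.0 - t_cold_k / t_hot_k

    print(carnot_efficiency(373.15, 293.15))  # boiling water against a 20 C room: ~0.21
    print(carnot_efficiency(293.15, 293.15))  # equal temperatures: 0.0

The second line is the conventional reason a single heat-sink is considered useless: with no temperature difference the Carnot formula gives zero.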

These days we have more understanding of atomic theory and that heat is actually just random kinetic energy of molecules. See https://en.wikipedia.org/wiki/Kinetic_theory_of_gases for a good explanation, and see https://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law for what happens with radiated heat. When we have real molecules exchanging momentum/energy in collisions, it should be pretty obvious that (a) regarding heat as a fluid is an approximation and (b) heat is not unidirectional from hot to cold but instead multi-directional and a colder body can and does radiate/conduct heat to a hotter body. The large number of random transactions in any reasonably-sized system means that the net flow of energy is always from hotter to colder, of course, but if you replace the colder body with one that is colder still, you’ll see that the hotter body cools more quickly, and that is because it’s not receiving as much heat energy from the colder body. It does not (and cannot) “know” what other bodies are in its vicinity to radiate to and cannot thus adjust the amount of energy it is radiating to make the calculations give the right answer. That would imply an awareness of what is receiving the heat that is radiated and would of course also break causality since those other bodies could be very far away.

It’s thus time to discard that idea of heat as a fluid and use a model that corresponds to reality. In reality, if we have two bodies of the same temperature (or a system in thermal equilibrium) then we can easily calculate how much energy is passing between those bodies. The Stefan-Boltzmann law gives us the radiated emissions from each body, and if there is a conductive path as well then it should be obvious that the amount of energy conducted out of a body is related to its absolute temperature, and that Newton’s law of cooling simply gives us the net flow by looking at the difference in temperature. If we have a flow of energy, then we should be able to get work out of that flow, providing we use the correct techniques. All that’s stopping us doing this is a semantic problem and a belief that 2LoT can never be broken – and of course that the device that actually does this is somewhat difficult to make. Later on in this dissertation I shall show that such devices have been made and do function, and that the 2LoT thus has a useful (and proven) loophole which will give us usable power from a single heat-sink (the environment) which can reasonably be regarded as Perpetual Motion of the second type, given how large that heat-sink is.

When a body is in thermal equilibrium with its environment it is still radiating and conducting energy away at the same rate, but its temperature does not change because it is receiving an equal amount of energy from the environment. Above absolute zero, therefore, there is always an energy flow available. Whereas with conduction it would be difficult to tap this energy flow since it is only visible and available at the atomic scale, with photons we have at least two methods of converting these to electricity. With electricity, we can do work.

Since a body above absolute zero radiates heat (the Stefan-Boltzmann law) and we can intercept that radiation and convert it to electricity, the Clausius statement is contradicted, in that heat is radiated from a colder body to a warmer body without anything else happening; the statement itself doesn’t say much about what happens, and it is the interpretations of it that are actually inadequate, since they only consider heat energy travelling between two heat-sinks from hotter to colder. The Kelvin statement is falsified, and Planck’s proposition can be shown by experiment to be wrong. A single heat-sink will radiate energy, and we can get work from that radiation even if it is the coldest body in the local group. Further down in this text, we’ll see some practical methods for doing this.

The semantics problem

One of the big problems in this field is that the language we use is not precise enough, and Energy and Work (which have the same units) are often used interchangeably. I’ll thus start by defining the language I’ll be using here, since a degree of pedantry is needed.

I’m going to separate the words out. We have two forms of energy here (kinetic and potential) and we also have work. Potential energy (PE) includes mass, energy that is stored in springs of some sort, gravitational potential (may be stored as mass, but I’ll skip that question for now), etc.. Kinetic energy (KE) is stored as things that are moving, so we include photons, moving masses and other unbound energy not stored as mass. Work is on the other hand a bit trickier. Whenever we do work, at the end of it things are just in a different configuration than before. Some work goes into kinetic energy and/or potential energy, some of it is simply that a lump of stuff is a different shape or location. Hammering a lump of iron into a sword (or ploughshare) takes a lot of work, but at the end of it we have the same lump of iron in a different shape. Work is not a conserved quantity, and that is an important observation that is central to this thesis. It is so important that I’ll add it in bold:

Work is not a conserved quantity.

Generally, when we’re talking of how much energy we’ve got, we don’t look at the whole amount available, since that would produce ridiculous numbers when we count up the number of joules of mass/energy in the various bits of stuff we’re looking at. Instead, what we look at is a local excess of energy over another place (generally the environment) and call this what we’ve got. The total energy in a litre of Diesel is massive, but we look at what we get from combustion of it (that converts a very small quantity of mass into KE in the form of heat) and how much more heat we have than the ambient temperature. To convert that heat energy into work we need to let that excess KE move into the ambient and we harness that movement to do the work of moving our car from one place to another. Since we start from stationary and end stationary (and normally end up in the same parking-spot at the end of the day), in fact all that KE we’ve liberated by burning ends up as heat in the atmosphere and we’ve actually done no work at all, not even Displacement Work. It gets a bit complex when you really follow where all the energy goes to and what has really happened. The simple statement of “we had 10 joules of energy and performed 9 joules of work with it” is imprecise. We need some words that specify where that energy actually went.
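
Here's a minimal sketch of the scale involved for that litre of Diesel (the density and heating value are round textbook figures):

    # Total mass-energy of a litre of diesel versus the energy released by burning it.
    C_LIGHT = 2.998e8           # speed of light, m/s
    mass_kg = 0.84              # approximate mass of one litre of diesel
    combustion_j = 36e6         # approximate heat released by burning one litre (~36 MJ)

    rest_energy_j = mass_kg * C_LIGHT ** 2        # ~7.5e16 J held as mass
    print(rest_energy_j, combustion_j, combustion_j / rest_energy_j)   # burning releases ~5e-10 of it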

I’ll try to restate all this a bit more simply, though. The sum of KE (energy of movement) and PE (energy stored as easily-convertible mass) remains constant no matter what you do. We get work done through the movement of energy from one place/body to another. Energy is conserved, but work is not, even though we use the same units to measure it. The work that is done can be divided three ways into KEWork (where the energy is stored as kinetic energy), PEWork (where the energy is stored as potential energy, which is some form of mass), and Displacement Work where no energy is stored in the work done and can thus not be recovered either.

Summary:

KE is kinetic energy, which is energy of movement.
PE is potential energy, which is stored in a way that can be released. I see the energy stored as mass since that is the only available store.
KEWork is work that is done where the energy goes into kinetic energy of something.
PEWork is work that is done where the energy is stored in some way as potential energy in the system.
Displacement Work results in a different configuration of the system, but does not store any energy nor use energy to do it.

A real system that turns available PE or KE into work will normally produce all 3 types of work in varying proportions. In the course of this the energy will move, and it is from this movement of energy that we get the work done. It’s worth pointing out that what we see and measure are frame-dependent and that observers in different frames of reference will disagree on the absolute values but will agree on the amount by which the values have changed. It is thus the changes that are important here, though normally we’ll agree on the absolute values too since we’ll be in the same frame of reference.

KEWork is actually just KE that is on the output side of the process we’re considering, and so is the same as KE. Ditto for PE and PEWork. Whether we call it energy or work is a matter of perspective. However, when we hammer that lump of iron, the KE that we put in all ends up as heat energy in the environment, which is also KE. We’ve just changed unidirectional KE into random-direction KE, and we normally call this energy “losses”. We can’t however lose energy, since it’s still there, and it’s simply in a form that we previously didn’t know how to re-use.

If you burn fuel to get that energy excess, then allow that excess to go to the environment through a heat engine, and then do Displacement Work with it, then all that fuel energy goes into heat in the environment, which is random directions of KE (so in fact the energy from the burnt fuel translates into KEWork, and that energy doesn’t actually disappear). Since this is in general low-grade heat, we have no heat-engines that will use that slight difference of energy from the environment to do any more work with it. Since we are taught that we can only get work out from a difference of temperature, we don’t see that the temperature itself is a store of energy that can be utilised, providing we have a method using only one heat-sink.

Work is the output energy of a process, when the configuration of mass/energy is different than before we started the process. Energy will naturally move from a higher concentration to a lower one, and not the other way round. If we want to make a concentration of energy, we have to do work to move it, and of course since work is energy then we’re just using an even higher level of directional energy to do that moving. This seems an intractable problem since we always need a higher level of energy in order to increase the density of a lower level, which is why so far we use the conversion of PE into KE (or mass to energy, in other words) to power our society. This can give us the higher density of unidirectional energy that we need to get work done.

A local concentration of kinetic energy will naturally spread out until the energy density is even and without any local high concentrations. A hotspot in any object will spread out until the temperature becomes even throughout the object, given enough time. Potential energy will also head towards the lowest possible level, releasing kinetic energy in the process. This can be seen using water – pour a glass of it into a bowl and you end up with a flat surface where no point is higher than another. Pour the glass of water through a turbine and you can get work done in the process, but since you may choose not to do this the work obtained can be anywhere from zero to (almost) the available excess energy when the glassful gets poured. This natural spreading of energy is because if we add energy in one place in a container (say as hot gas) then each hot molecule will undergo a random walk that will, over time, give an equal probability of being in all places it can physically get to. Much the same happens with photons, except that they do not appear to collide with each other but can be reflected by mirrors, so may not be as evenly distributed. After enough time, though, the absorption and re-emission of photons in random directions will still result in an even distribution of energy.
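
That spreading-out can be illustrated with a toy model; this minimal random-walk sketch (the cell count, particle count and step count are arbitrary choices of mine) starts with all the energetic particles in one corner and lets them wander:

    # Toy model: "hot" particles start in cell 0 of a 1-D box and random-walk until evenly spread.
    import random

    cells = 20
    particles = [0] * 1000                     # all particles start in the hotspot at cell 0

    for _ in range(2000):
        particles = [min(cells - 1, max(0, p + random.choice((-1, 1)))) for p in particles]

    print([particles.count(i) for i in range(cells)])   # roughly even occupancy everywhere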

Part of Quantum Theory says that there is a residual amount of kinetic energy left even at absolute zero, and that things will thus still move. Some people think therefore that this Zero-Point Energy (ZPE) should be able to be tapped. There’s some logic there, in that if something is moving then we should be able to make it do some work, but in the case of ZPE I suspect it’s a problem of how we measure things, and that that energy is not actually available, but is instead imaginary. Just because we measure something to be in a slightly different position does not mean that it’s actually moved, but that since our measurements require us to use some sort of particle to hit the thing we can’t really be totally sure of the measurements. There’s an underlying uncertainty of where the fundamental particle actually is, which is a probability function. ZPE therefore seems to me extremely unlikely to be a source of new energy and thus to break CoE.

I’ve also pointed out that work can be subdivided into stored KE, stored PE and displacement, and it should be pretty obvious that a simple displacement is a zero-energy transaction, so there is a chance that we can get the displacement-type work (which is often what we want) for free. That displacement is a zero-energy transaction has been known since Newton, since he said that a body in motion would continue in that motion unless there was a force acting on it. So – if you lift something up you’ve put in work which is stored as gravitational potential energy, and if you let it down again you can get that energy out again as work, but if you move it sideways then no work is done except against friction, and that can be reduced arbitrarily close to zero. If you watch a pendulum swinging, you see gravity doing work accelerating the pendulum and then slowing it again, and yet over a cycle very little work is actually done, and that work is in heating the air as the stored energy in the system dissipates. This is an example of Displacement Work that can easily be analysed as not needing any energy to perform in loss-free conditions. It also highlights the need to look at a complete cycle in a cyclic system, in order that we don’t think work is being produced when it isn’t.

Still, what is work, really? We can define it as a force times a distance, or as a second’s worth of voltage times amps, and the unit of work is the joule in SI units. What is really happening is that our initial store of energy as PE or KE is divided up into an output amount of PE or KE that may be in different locations and may be associated with different objects than it was at the start of the process. What has changed is the configuration of the energy. Some of the initial energy may have gone into a form where we’ve had problems before in re-using it, such as heat from friction that we normally dissipate into the environment in order to stop the device from overheating. KEWork and PEWork are thus no different from KE and PE, and we only give them a different name in order to specify that they are the output side of what we are doing. Displacement work is just a shuffle of the positions of things, and would not be in the definition of work at all except that it’s normally the desired result of using (often stated as expending) energy – we want that sword to be re-shaped in order to plough the land. Language can be a bit difficult, when the roots of it go back so far. Hammering that lump of steel into the right shape will bring on a sweat, and you can’t deny that the blacksmith is working hard, but the end result has no more energy embodied than the initial lump of steel.

In looking at a purported Free Energy (or perpetual-motion) machine we should thus look at the total interplay over a cycle between KE, PE and the three kinds of work (Stored KE, stored PE, displacement). Losses almost always end as heat, which is KE. Count the joules in and out and decide how to assign them in those 5 buckets. All the historical examples I’ve looked at show no extra joules out than was put in, with the illusion being created by hidden power sources if it appears to work at first look. Conservation of mass/energy seems to be absolutely true with no exceptions.

We know that solar panels work well. We also know that if we have any cyclic motion or wave then we can rectify it and get some work done – the energy in that wave gets diverted into a path of our choice and released elsewhere after having done some work. One thing to remember is that the total energy doesn’t change – if we take some in one place it has to turn up in another. All we need to look for is a flow of energy that exists and to divert it so that it does what we want done before we let it go again. Displacement-type work does not take energy to perform, though there is an element of borrowing and returning from the energy bank. Energy is put in to start the motion, then the motion continues until we stop it and get the energy out again. That sword-to-ploughshare change is displacement-type work, in that no excess energy remains after the work has been done. The vast majority of what is normally regarded as work is displacement, where at the end of a cycle there is no work (KEwork or PEwork) actually done. Displacement work does not store any energy, and we should therefore be able to do it without using a movement of energy – all the work we do put in ends up dissipated into the random KE of heat. At least we should be able to do it with a draw on the energy bank followed by a return of that energy.

At this point, we’ve shown that work is done when we harness a movement of energy, normally from one place to another where we can quantify the energy more easily but actually it’s just the movement itself that is important. We’ve shown that energy is not used up by doing work. We have also shown that there are such energy flows even in a thermal equilibrium situation. If we can see an energy flow and have the means to harness it to do work, it doesn’t matter whether the source of that flow is a higher or lower temperature than our equipment – this simply is not relevant. Work cannot be done by something in the past or future, or at a distance, but can only be done by a movement of energy here and now. At its heart, we see that though Displacement Work is simply a different configuration of where mass and energy are, KEWork is the same thing as Kinetic Energy and PEWork is the same thing as Potential Energy. Work (both KEWork and PEWork) is therefore simply a transformation of that mass/energy into another form and must have a distinct place and time for that transformation in order that mass/energy is conserved, whereas Displacement Work does not involve such a transformation and does not require energy to do it. Any flow of energy can be diverted to do work (of any type) for us if we use the right method, and it does not matter where that energy comes from or is going to but simply that the energy flow is available at the place and time where the work is done.

The majority of the “energy sources” and devices we normally use start by releasing PE from a fuel of some sort. This then creates an energy-flow to the environment that is then utilised to perform the work we want done, using a heat-engine that requires two heatsinks to function. Renewables utilise an existent flow of energy in the natural world, and divert this energy for our use without needing us to burn fuel to create the energy-flow. I have shown here that there is in fact an energy-flow available even in a system in thermal equilibrium, that we haven’t noticed because the net flow is zero. The 2LoT tells us that we can’t utilise this energy flow because the net flow is zero, yet we already have devices that can utilise this flow and produce electricity from it. We just need to recognise what is happening and improve the delivery of power from real devices.

The stage is set for the logic showing the loophole in 2LoT, and therefore why Eddington’s pronouncement from 90 years ago is demonstrably incorrect.

The logical case for Perpetual Motion

We now need to build the logical case for how to bypass 2LoT and to get usable work from ambient temperatures. This is set up as numbered assumptions and observations, which all need to be correct in order that the deductions in the conclusions are true.

Assumptions:
1: E=mc² applies, and energy and mass are thus different forms of the same stuff.

2: Conservation of Mass/Energy universally applies, so the amount of mass/energy we have is constant. This also implies that ZPE is not available, and that we can only juggle with the energy that is available. No Free Energy, in other words.

3: Conservation of Momentum universally applies. Even if we can’t see it, every action has an equal and opposite reaction. This is important when considering heat conduction. This may not be totally true at very low accelerations where momentum may be effectively quantised, but for normal situations the deviations are calculated to be too small to be measurable. See http://physicsfromtheedge.blogspot.fr/ for a mind-blowing theory that may well be true.

4: Work is not a conserved quantity. This can be seen in action in any office….

5: There is a hidden assumption of causality, in that the cause always happens before the effect. Although in quantum theory this may be slightly violated, the timescales are very short and related to the uncertainty in measuring exactly where something is when something happens. Such violations will not affect the somewhat larger systems we need for a real-life system.

Observations:
1: Kinetic energy is energy of motion. Heat is simply random-direction KE. It will spread out to fill the available volume and tend towards an even distribution. We cannot totally confine it or stop it from heading towards that even distribution (though we can slow it down). A system of bodies at varying temperatures will change in the direction of having everything at the same temperature in thermal equilibrium. When such a set of bodies is in thermal equilibrium, each must be receiving exactly as much energy as it is putting out as radiation and conduction. The Stefan-Boltzmann law tells us that each body will be radiating energy no matter what the environmental temperature is.

2: Potential energy is stored as mass, and will tend toward a local minimum with a release of kinetic energy.

3: Matter is particulate – there is a scale at which this must be taken into account in the calculations. At a large enough scale (practically all mechanical devices we can make in the workshop with manual tools) we can consider matter as continuous because the numbers of atoms are so large that statistical calculations are closer to exact than we can physically measure. At a small-enough scale we are instead dealing with individual energy transactions and statistical mathematics no longer applies (and will give the wrong answers). At atomic scales, Newton’s laws will apply and within our frame of reference a slower and less-energetic particle can lose energy to a more-energetic particle. You can set up a demonstration of this on a snooker-table, with a fast ball down the length and a slow ball from the side. If you get the timing right, the slow ball will stop and the fast ball will gain speed as it is deflected.
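
Here's a minimal sketch of that snooker-table case, treating the balls as equal-mass, frictionless, elastic spheres (the speeds and the contact geometry are chosen by me to give the described outcome):

    # Equal-mass, frictionless, elastic collision: the velocity components along the
    # line of centres are exchanged, the tangential components are unchanged.
    def collide(v_a, v_b, n):
        # n is the unit vector along the line of centres at the moment of contact
        a_n = v_a[0] * n[0] + v_a[1] * n[1]
        b_n = v_b[0] * n[0] + v_b[1] * n[1]
        v_a2 = (v_a[0] + (b_n - a_n) * n[0], v_a[1] + (b_n - a_n) * n[1])
        v_b2 = (v_b[0] + (a_n - b_n) * n[0], v_b[1] + (a_n - b_n) * n[1])
        return v_a2, v_b2

    fast = (2.0, 0.0)    # fast ball going down the length of the table
    slow = (0.0, -0.5)   # slow ball drifting in from the side
    n = (0.0, 1.0)       # line of centres perpendicular to the fast ball's motion

    print(collide(fast, slow, n))   # fast ball becomes (2.0, -0.5): faster and deflected; slow ball stops

Momentum and kinetic energy are both conserved in the collision, yet the slower, less-energetic ball has handed its energy to the faster one.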

4: Thermodynamics regards heat as a fluid that only flows from a hotter body to a colder body. This is indeed what we measure to happen, but it is not a correct model. This is a basic flaw in the derivation of the theory of thermodynamics, and is shown to be wrong by the Stefan-Boltzmann rules for radiation which show that every body above absolute zero will radiate according to the fourth power of its temperature. Two bodies in thermal equilibrium with each other must be radiating energy, and must therefore each be receiving as much energy as they are radiating. Radiated heat thus is always bidirectional (in fact omnidirectional) in its nature.

5: Conservation of Momentum also implies that conducted heat is also bidirectional in its nature. Heat will only pass in one direction when one of the bodies is at absolute zero temperature, and real matter will not achieve absolute zero. I can’t however think of a way to use the logical energy flows in heat conduction to produce a usable unidirectional energy flow external to the body. CoM thus makes macroscopic mechanical gas-based engines (and similar ideas) subject to 2LoT, unable to exceed Carnot efficiency.

6: Work can be performed only by harnessing the movement of energy from one place to another. Energy may however move without performing work. The amount of energy that moves defines the maximum amount of work that may be performed. If 1 joule of energy is moving from one location to another, then up to 1 joule of work may be done between those two locations. To get continuous work done, you need a continuous flow of energy. Classically, to produce a local concentration of energy, you need to either burn fuel which releases mass as energy, use an available energy-flow such as solar power to collect energy, or do work on the energy that is there to push it where you need it. 1 joule of work will produce at maximum 1 joule of local excess energy (from which you can at maximum extract 1 joule of work subsequently). This is why a perpetual motion machine is considered impossible. Kinetic energy of heat naturally disperses rather than concentrates itself, so the net heat flow will not naturally move from a colder to a hotter body, although we have shown already that there is an energy flow in both directions.

7: Thermodynamics does not apply to electricity, but only to the heat that is transferred. Converting the kinetic energy of heat (either as photons or as velocity of molecules) into electricity removes it from the thermodynamic equations. This is maybe the most contentious observation, but a photon incident on a photoelectric layer in a photovoltaic cell (PV) is converted 100% to electricity since energy is conserved (though note that excess photon energy over the PV voltage is normally converted to heat in the PV). The percentage of conversions is a statistical problem, but if a photon is converted then all of its energy becomes electrical – we’re dealing with a single energy transaction at a time, even though we’re also dealing with a large number of them. If that harvested energy is transferred down the wires to a load, then by Conservation of Energy it must be removed from the thermodynamic equations of the PV cell. Where heat-energy is converted into electrical energy, and that electrical energy is taken down some wire to somewhere else, that heat-energy disappears from the system which thus becomes colder than it was.

Similarly, even though it’s not quantised as far as I know, the kinetic energy of a molecule may be converted partially to electricity by a piezo crystal or a small-enough mechanical conversion, and the electricity thus produced may be used elsewhere. From an energy viewpoint, we are simply diverting some of that incident energy along a different path. This mechanical KE conversion system cannot be 100% effective since the incident molecule needs to be cleared from the receptor area – if you stop it dead it’s going to just sit there in the way. Once that bit of random heat energy is converted to electrical energy, which is not random but has a direction (so statistical mechanics no longer applies), we can direct it to wherever we want using wires and can do work with it, provided we take it away immediately so that the reverse reaction cannot occur. For most work, that directed electrical energy will get turned back into random heat energy again, but it may be stored in a battery or in lifting weights, compressing springs etc.. After a while, we’ll use that stored energy and it will again probably end up as heat in the environment.
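
For a feel of the size of each photon transaction mentioned above, here's a minimal sketch (the 10 micron wavelength is the room-temperature IR region used elsewhere in this text):

    # Energy of a single ~10 micron photon, and how many such conversions make up one watt.
    H = 6.626e-34          # Planck constant, J s
    C_LIGHT = 2.998e8      # speed of light, m/s
    wavelength = 10e-6     # m, assumed room-temperature IR photon

    photon_j = H * C_LIGHT / wavelength    # ~2.0e-20 J, about 0.12 eV
    print(photon_j, photon_j / 1.602e-19)  # energy in joules and in electron-volts
    print(1.0 / photon_j)                  # ~5e19 such conversions per second for 1 W of output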

8: The natural state for any body is to be at absolute zero. If it has more energy than this it will either radiate it away or conduct it away because such random kinetic energy will spread out. The only thing that stops the body reaching absolute zero is the inflow of heat from other bodies that are also getting rid of their heat at the rate appropriate to their temperature and composition. This is maybe the most startling observation, but it can be seen to be true: the Stefan-Boltzmann relationship states that this will happen (and also lets you say exactly at what rate). It should take no work to allow a body to cool down; rather, it should deliver work in doing so. The work we put into refrigeration is effectively wasted.

9: When we convert the KE of a photon or molecule into electricity, we take that energy out of the local environment and move it along wires to somewhere else where we can use it to do work. In this case, the PV or other device must appear to be colder than the environment since the heat or energy that goes in is not re-radiated from the local space but reappears elsewhere. Like a cold body, a larger amount of heat goes into the device than comes out of it as heat, with the balance as electricity that is used elsewhere. This cooling effect is of course measurable. The reason that PVs and nantenna arrays have not in general been tested for this cooling effect is that it is obvious that it should happen from Conservation of Energy, yet nevertheless it hasn’t been remarked upon that this is against 2LoT. Maybe that quote from Eddington has something to do with this omission.

Assume we have a PV cell in the sunlight, and that we are not drawing any current from it. Assume the air temperature is 20°C and that we measure the PV temperature as 40°C, and that we have incident sunlight at 1kW/m² and that the PV is 20% efficient (so outputs 200W/m²). If we start to draw that 200W/m² from the PV, then its temperature will drop by 4°C to 36°C. This is an easy test of the cooling-power of a PV. Of course, I’ve made some approximations here and you won’t get exactly these figures, but it helps to put a number on things. Here, the PV is above the air temperature because it cannot convert all the LWIR it receives, so that is converted to heat in the PV as is the balance of the photon energy above the bandgap. If we used a different band-gap voltage for the PV then the balance will change.
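
The arithmetic implied there, as a minimal sketch (the assumption that heat loss to the air is proportional to the temperature excess is mine, and all the input figures are the illustrative ones above):

    # Rough estimate of how much a PV panel cools once current is drawn from it.
    ambient_c = 20.0
    panel_c = 40.0            # assumed open-circuit panel temperature in the sun
    absorbed_w = 1000.0       # incident sunlight, W/m^2, assumed all absorbed
    drawn_w = 200.0           # electrical power taken out at the assumed 20% efficiency

    loss_per_degree = absorbed_w / (panel_c - ambient_c)     # 50 W/m^2 of heat loss per degree of excess
    new_excess = (absorbed_w - drawn_w) / loss_per_degree     # 16 degrees of excess remain
    print(ambient_c + new_excess)                             # ~36 C, a 4 C drop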

If your logical box extends to enclose both the source of the energy and where it is used, then 2LoT is satisfied, but if you only consider either the location of the conversion or the location of the use of that energy you instead will see energy disappearing from the system or appearing where the work is done. The wires produce a physical distance between the cause and the effect, and of course storage of that energy can put a distance in time between generation and use of the energy. Overall, the laws are still applicable if we make the logical box big enough in time and space (so this isn’t really breaking 2LoT) but locally we have produced useful work without needing any fuel.

Conclusions:

1: For macroscopic mechanical devices, whose working dimensions are large relative to either the atoms of which they are made or to the working fluid used, the statistics of random collisions mean that the net energy flow will always be from the hotter to the cooler and thus that 2LoT will apply. We thus can’t get any Free Work out of them, and real machines will only do a bit less work than we put in to start them because there are always losses. This eliminates most of the traditional Free Energy ideas (and the majority of new designs too) as viable ways of doing the job of giving us Free Work.

2: If we utilise the quantum properties of photons, where if an interaction happens (photoelectric effect or antenna) then the photon energy is converted totally or partially to electrical energy, then we can convert heat or light directly to electricity. We do not need two heat-sinks for this, since a body naturally radiates and so we simply intercept this flow. An example from real life is a solar panel. These can be driven from LEDs (at any physical temperature, and that can be below the temperature of the PV) as well as sunlight, and then it should be obvious that this is not a process that is covered by either thermodynamics in general or 2LoT in particular. It is a quantum process. It takes in random-direction KE and outputs unidirectional KE.

We can also use a rectenna to convert the THz region of the IR spectrum into available electricity, using the wave-nature of photons. These have been manufactured and tested. Although when we leave the nantenna open-circuit it will be in thermal equilibrium with its environment, once we start to draw current it will appear colder than the environment. Heat from the environment will thus flow in, and flow out again as electricity to drive a load. When used in the load, it will immediately or eventually be returned to the same environment it came from, to be harvested again by the nantenna. The same energy is recycled again and again to give continuous work output as long as the devices used do not wear out. This is a Perpetual Motion system of the second kind.

3: We should be able to make a mechanical device that similarly converts gas-pressure to electricity if we make its scale comparable to the mean free path of the gas we are using. For air at STP this means around 0.07 micron in size and capable of resolving impacts at rates in excess of 7GHz. A section of piezo-electric material could be used, since a diaphragm and coil arrangement would require very small feature-sizes. A MIM diode could be fabricated to rectify the signal produced. Difficult but not impossible using modern fabrication techniques, and we can always change the gas used to use a somewhat larger feature size. Each collision of a gas molecule would result in a small electric charge being produced and taken out along the wires. We can reasonably expect to harvest in excess of 1kW/m² from such a device using atmospheric pressure and temperature (ambient energy), given that around 10 times this is available. Note that we calculate the energy in a volume of gas as PV (pressure times volume), and that that energy is actually available to use to do work, with the right conversion. Changing the scale of the machine to somewhere comparable to the mean free path of the gas makes this possible. There is currently no extant device that will do this, but there is no good reason why this cannot be built if desired or why, once made, it would not work as specified. Again, this would allow the same energy to be recycled, and on each pass through the system that energy would do more work. This is thus also a Perpetual Motion device of the second kind.
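
The quoted feature size and frequency can be checked with standard kinetic-theory formulas; here's a minimal sketch (the effective molecular diameter for air is an approximate textbook value):

    # Kinetic-theory check of the figures quoted above for air at roughly STP.
    import math

    K_B = 1.380649e-23        # Boltzmann constant, J/K
    T = 293.0                 # K, about 20 C
    P = 101325.0              # Pa, one atmosphere
    D = 3.7e-10               # m, approximate effective diameter of an air molecule
    M = 4.81e-26              # kg, average mass of an air molecule

    mean_free_path = K_B * T / (math.sqrt(2) * math.pi * D ** 2 * P)   # ~6.6e-8 m, i.e. ~0.07 micron
    mean_speed = math.sqrt(8 * K_B * T / (math.pi * M))                 # ~460 m/s
    print(mean_free_path, mean_speed, mean_speed / mean_free_path)      # collision rate ~7e9 /s, i.e. ~7 GHz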

4: There will be other methods possible that I either haven’t thought of or haven’t mentioned. In order to get continuous work out, we need to have a continuous loop of energy. Here, I have concentrated on the simple methods where the reason for the energy-loop is obvious, and where the devices have either been built and tested or where the technology is available to make them. The losses in a normal machine are actually simply changing unidirectional KE to omnidirectional KE (heat) that simple classical machines can no longer use. The energy is however still there, and we need to simply change the omnidirectional energy back into unidirectional energy. Any device that does that will give us an effective perpetual motion system, since we can’t destroy the energy itself.

5: For those who look to the far-distant future where the whole universe is in thermal equilibrium, it should be noted that that would not stop us either collecting energy together or doing work, though of course it would be a bit more difficult than today. Where there is an energy-flow, we can still harvest work from it, and at anywhere above absolute zero there is always an energy flow.

To restate the conclusion in a different way:

We are surrounded by a vast amount of kinetic energy that is moving in random directions such that the vector sum is substantially zero. If we use either the photoelectric effect, or a mechanical engine on a small-enough scale, to convert the random directions into a unidirectional energy flow along a path we choose, then we can perform continuous work using that flow without needing to burn fuel of any sort. Since this principle of Perpetual Motion has been demonstrated with a nantenna array, it is not impossible as has been held for centuries. Looping the available energy in order to achieve continuous work is in fact the ultimate renewable power source, and may be used anywhere on the globe. At this point in time the available methods of tapping this resource are low-power, but I am working with friends to improve the current levels of a few mW to designs that can potentially supply in the kW range.

We have been taught that only a linear movement of energy can be used to do work, and that thus the amount of work that can be done is limited by the excess energy available. In fact, it is possible to generate a loop of energy and then the amount of work that can be done is unlimited, except in so far as the energy is stored as KEWork and PEWork. Work itself is only a different configuration of energy in a system, and if we can direct the paths along which the available energy is allowed to move then we can do any work we want to without needing to add extra energy into the system. The most convenient way of directing energy is as electricity, which we can store and release as desired and which can be moved along wires from the source to the destination.

Practical devices

The obvious practical device is a rectenna that is tuned to the IR frequency required. These are very small, so are normally referred to as nantennae. A reference on how these are manufactured is in the appendix, so there’s no need to explain the manufacture here, but simply to say what they are.

A nantenna is an antenna tuned to the IR band of interest (around 10 micron wavelength for room-temperature objects) and combined with a MIM diode to rectify it. These nantennae are fabricated in an array with connecting wires to deliver the electricity generated. With only a few watts/m² available within the bandwidth of the nantenna array, the available power out is not much – refer to the appendix for measurements.
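
As a rough check on that "few watts per square metre", here's a minimal sketch that integrates Planck's law over an assumed band centred on 10 micron (the 1 micron bandwidth and the 20°C temperature are illustrative assumptions):

    # Black-body power per square metre within a narrow band around 10 micron, at about 20 C.
    import math

    H, C_LIGHT, K_B = 6.626e-34, 2.998e8, 1.381e-23
    T = 293.0

    def spectral_exitance(lam):
        # Planck spectral exitance, W per m^2 per metre of wavelength
        return (2 * math.pi * H * C_LIGHT ** 2 / lam ** 5) / (math.exp(H * C_LIGHT / (lam * K_B * T)) - 1)

    lam_lo, lam_hi, steps = 9.5e-6, 10.5e-6, 1000       # an assumed 1 micron wide band at 10 micron
    dlam = (lam_hi - lam_lo) / steps
    in_band = sum(spectral_exitance(lam_lo + (i + 0.5) * dlam) for i in range(steps)) * dlam
    print(in_band)   # roughly 28 W/m^2 across this band; a resonant antenna couples to only part of it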

If we get power out of the nantenna array, it follows that heat is being taken out of the environment (by Conservation of Energy) and that the nantenna array will be colder than the local environment. While we take power, the nantenna array is a cold-sink relative to the environment. Heat goes in to the nantenna array and comes out as electricity. All the heat that is thus transformed (100%) will end up as electricity in the wires, since energy is conserved. In contrast, if you look at a Peltier block or other thermoelectric generator, you’ll see that it needs to have one face hotter and the other face colder, and that the electricity is here produced as a consequence of the movement of net heat from hotter to colder. The Peltier block is thus limited by the Carnot limit and 2LoT, and so far they are way off that efficiency anyway. The nantenna, as we’ve seen, needs only one radiator of EM energy in the waveband it is tuned to, and though it also won’t translate all incident photons to electricity (thus won’t reach 100% efficiency in real terms), the efficiency is not limited except by how good the design is.

The energy-loop can be diagrammed thus:

Environmental heat => IR radiation => nantenna array => electricity => work in load => heat energy => environmental heat

The “work in load” step may be modified by storage of the electrical power in a battery, or the work may involve lifting something against gravity, but after some time the stored energy will be released back into the environment unless we send it off the Earth, for example as a laser beam. The load here is anything we want to do; it could be a simple resistor that heats up, it could be the motor in your electric car, or your mobile phone – anything that uses electrical power. Exceptions are easy to put forward, but each step is allowed in both theory and practice, and for most work we want to do the diagram is what would happen.

If we put the nantenna array in a closed insulated container, and take the electrical output to a resistor or other load such as a motor or lamp, then we will see that the temperature of the container will drop at a rate equivalent to the power delivered to the load. This is precisely what 2LoT says cannot happen, since the only results are that the container cools down and work is done outside it.

If you enclose the resistor or load in a second insulated container, then that container will heat up as the first cools down, and after enough time you will be able to run a heat engine between the two containers. Interestingly, if we regard the two containers as an isolated system, then the total entropy of that system will also decrease as the temperature-difference between the containers increases.
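
Taking that two-container scenario at face value, the entropy bookkeeping looks like this in a minimal sketch (the heat quantity and the temperatures once a difference has built up are arbitrary assumed values):

    # Entropy bookkeeping for the two-container scenario above, taken at face value.
    q = 100.0                        # J moved from the cooler container to the warmer one (assumed)
    t_cold, t_hot = 290.0, 296.0     # K, assumed temperatures once a difference has developed

    ds_cold = -q / t_cold            # entropy lost by the container being cooled
    ds_hot = q / t_hot               # entropy gained by the container being warmed
    print(ds_cold + ds_hot)          # negative: the total entropy falls, which is the claimed 2LoT conflict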

If we leave the nantenna in an open environment, and connect the wires at some large distance to a motor that does some work such as lifting a weight or pumping water, then that work will continue to be done as long as the motor doesn’t wear out, or the nantenna does not corrode or otherwise fail. In a practical sense, this is perpetual motion. The energy comes from the environment, is converted to electricity and then work, and at some point will return to the environment as heat which can then rejoin the cycle. If you want a shorter cycle, then put a fan on the motor and enclose it in the insulated container, when it can be seen that the available energy is simply cycled between heat energy and electrical energy – work is done but the container neither heats up nor cools down. The fan will continue to run until it wears out.

It should be obvious that however you state 2LoT, this system breaks it. It follows from this that 2LoT has a loophole that we have the means to exploit, and that a practical perpetual motion device that also delivers power to the user (the nantenna array) is in fact possible and has been built and tested. Practical details of manufacturing and testing are in the appendix. The key point in breaking 2LoT is the conversion of energy in random directions to energy with a single direction, which requires engineering on a microscopic scale.

A second type of device is also obvious, in that we need to make a PV using a semiconductor with a bandgap of around the same energy as the thermal energy at room temperature. Somewhere in the band 10-100meV would function, and the standard methods of making a PV should be applicable here as well. It is somewhat difficult to calculate beforehand what the actual output of this would be, since there is some confusion over the emission of IR wavelengths within a lattice – we only measure the emissions outside the lattice. It seems reasonable to suppose that IR is transmitted and received within the lattice in very near-field conditions, and so the actual emission rate may be a lot higher than expected. We will find this out by experiment. For far-field, though, a bandgap of 25meV will see around 138mW/m² available, and may convert 20% of that into electricity. This device is planned to be made during 2017, and the data it produces will be used in the next design.

A third type of device has been mentioned earlier, in that a piezo-electric device which is smaller than the mean-free-path of the gas it is immersed in will be able to resolve each individual collision of a gas molecule. For air at STP the mean free path is around 70nm, so the element needs to be about that diameter or smaller, but we could use a heavier noble gas such as Argon, Krypton or Xenon, or Sulphur Hexafluoride, in order to be able to use a larger element, or we can run the gas at a lower pressure. Electrodes on the piezo would take the signal to a fast MIM diode (or equivalent) for rectification, and so for each collision we’d get a DC electrical pulse that will add to the output from an array of similar devices. It is reasonable to expect somewhere around 1kW/m² from such a device. Gas molecules hitting the piezo will rebound with a lower velocity and we’d measure that as being cooler. If we are using other than air for the device, or other than atmospheric pressure, we’d need a hermetic seal to hold the working gas in and a heat-exchanger system to keep the working gas at near-ambient temperatures. Since I published this idea here in 2013, it’s not going to be patentable so I don’t expect it will get built until we have enough money to do it ourselves.
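
As a cross-check on the ~70nm figure, the mean free path follows from standard kinetic theory. A minimal Python sketch, where the effective collision diameter for air is a textbook-style assumption rather than a number from this essay:

```python
# Kinetic-theory mean free path: lambda = kT / (sqrt(2) * pi * d^2 * p).
# The collision diameter d is an assumed typical value for air molecules.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T   = 293.0          # about 20 degrees C, in kelvin
p   = 101325.0       # Pa, one atmosphere
d   = 3.7e-10        # m, effective collision diameter of an air molecule (assumption)

mfp = k_B * T / (math.sqrt(2) * math.pi * d**2 * p)
print(f"mean free path ~ {mfp * 1e9:.0f} nm")   # comes out in the mid-60s of nm
```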

A large amount of power is used in cooling. If we use one of these devices for cooling, we actually get power out instead. Your refrigerator now becomes a power source in your house, rather than a power drain, though of course the energy it delivers comes from the heat in the food so will be intermittent. Producing liquid air products will also produce power rather than use it. This may cause some big changes in the ways we use “energy” since what we are really using is work, and work is free providing you can loop energy. It may take a while for the language to catch up, though.

Conclusion

It has been shown that for the heat-engines we know, 2LoT describes the limitations very well. While the feature-sizes of our devices are large relative to atomic dimensions, we will not be able to make a device that breaks 2LoT. On a macroscopic scale, heat will always flow from a hotter body to a cooler body, and we will be limited to devices that require a hotter side and a colder side to perform work by the controlled transfer of heat from the hotter to the colder heat-sink. The energy flows in a linear fashion from hot to cold, and will not go in the reverse direction without us putting work in. If we have a certain amount of excess energy in one location relative to another location or to the environment, we can do up to that amount of work by allowing that movement, but no more.

If however we use either the quantum nature of photons or the particulate nature of matter, and change the scale of our manufacture to suit, we can instead use a single heat-sink (or in fact gas pressure rather than gas-flow) to perform work, and instead of the energy flowing in a linear fashion it will be looped on itself, logically if not physically. We convert the environmental random-directional KE to unidirectional KE to do work, and normally the work results in random-directional KE again. The same quantity of energy can give us a continuous delivery of work for as long as the device lasts, provided that output energy is not stored. Meantime, the energy store in the environment is vast and is replenished by the Sun, so once we can use this we won’t run out of available energy. I have shown that practically a nantenna array will do this, and that such arrays have been made and tested, and I have also given two other examples of designs that should do the same job but have not yet been tested. Though the currently-available devices only deliver a small amount of power, once the possibility of them is realised we should soon get much-improved power delivery. While they are regarded as impossible, and as fraudulent when they are claimed, few will invest money in their development.

If we develop this as a technology, then we will no longer need to burn fuel to provide power. This is the ultimate carbon-free power, in that once the device is made it will continue to deliver power indefinitely. In hot climates, it can be used as air-cooling and will deliver power in the process. In cold climates, it can move the available energy from outside the homes to the inside and thus keep people warm without needing any fuel to do so. Energy is conserved, but work is not.

The central point is in understanding how and why energy moves the way it does. For heat energy, it is a random walk and so after enough time it will have equal probability of being everywhere accessible to it. Even when it is at this uniform concentration, though, there is still a measurable flow of energy, and all we need to do work is a flow of energy. At thermal equilibrium, the flows are just equal and opposite, and so the simple devices we’ve made for a net positive flow won’t work. We need instead devices that allow the heat energy to loop and thus deliver continuous work output. These devices already exist, but only deliver a few microwatts to milliwatts (if a large device) and are not yet useful.

Widespread use of such devices at the kW scale and above will take heat out of the local environment, but this will also be returned to the same environment after maybe a delay through accumulation in batteries to deal with peak loads. The net result will be zero, unlike burning fuel which returns energy to the environment that may have been stored millions of years ago. Since taking a large amount of heat from the environment will get more difficult the larger the amount, because of icing-up and suchlike, I do not expect these devices will be used for power-levels in the many tens of kW, though it’s possible that in a car or truck there will be enough airflow to sustain enough heat-transfer to actually run the vehicle or at least a substantial percentage of the load. We will still need other power-sources, but not to the same extent as today. It’s likely that, with a backbone of continually available (though temperature-dependent) power, the other renewable power sources will be able to supply the rest of the power needs of our society. If these devices are used in conjunction with some heat-source, they will be almost 100% efficient in converting heat to electricity, unlike other heat engines. The only losses will be in the wiring resistance, thus losing some heat outside the system, since energy is conserved.

Since Perpetual Motion machines are in fact possible, and have been made and shown to work, we should direct resources into research into how to deliver a higher power-level. We do not need to have two heatsinks in order to convert heat to electricity, but we have the technology now to do this using a single heatsink (though currently at a low power). We need to improve the technology, and this does not need new physics but simply another look at what we already know.

Glossary

2LoT – Second Law of Thermodynamics

CoE – Conservation of energy

CoM – Conservation of momentum

Heat engine – a device that utilises the difference of temperature of an energy-source and an energy-sink to produce output work. The thermodynamic efficiency is limited to the Carnot efficiency, but since an optimal zero-loss Carnot engine will in fact convert the energy difference totally into work, that Carnot limit of thermodynamic efficiency is simply equivalent to CoE. You don’t get more energy out than you put in.

IR – Infra-red light, radiated by all bodies above absolute zero temperature.

KE – Kinetic energy

MIM – Metal-Insulator-Metal diode or tunnelling-device

PE – Potential energy

Perpetual Motion of the second type – where a device uses no fuel or other obvious source of power yet runs itself and a further load as well. Long considered impossible, but here shown to be not only possible but manufactured and proved to work.

PV – Photovoltaic cell, which produces electrical power from incident visible and IR light.

Stefan-Boltzmann law – this states that the radiation from a hot body is proportional to the fourth power of the absolute temperature, j = σT⁴ with σ ≈ 5.67×10⁻⁸ W/(m²·K⁴). See http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/stefan.html for details.

STP – Standard temperature and pressure of 0°C and 100kPa

ZPE – Zero-point energy of a particle, which is the energy that it retains in its ground-state at absolute zero temperature. This may be only theoretical, and not practically available. In any case, to date no-one has managed to extract it definitively.

Appendix

1: A dissertation on how to make and use a nantenna array

ABSTRACT

Title of dissertation:

IR DETECTION AND ENERGY HARVESTING USING ANTENNA COUPLED MIM TUNNEL DIODES

Filiz Yesilkoy, PhD Dissertation, 2012

Dissertation directed by: Professor Martin Peckerar, Department of Electrical and Computer Engineering

The infrared (IR) spectrum lies between the microwave and optical frequency ranges, which are well suited for communication and energy harvesting purposes, respectively. The long wavelength IR (LWIR) spectrum, corresponding to wavelengths from 8μm to 15μm, includes the thermal radiation emitted by objects at room temperature and the Earth’s terrestrial radiation. Therefore, LWIR detectors are very appealing for thermal imaging purposes. Thermal detectors developed so far either demand cryogenic operation for fast detection, or they rely on the accumulation of thermal energy in their mass and subsequent measurable changes in material properties. Therefore, they are relatively slow. Quantum detectors allow for tunable and instantaneous detection but are expensive and require complex processes for fabrication. Bolometer detectors are simple and cheap but do not allow for tunability or for rapid detection.

Harvesting the LWIR radiation energy sourced by the Earth’s heating/cooling cycle is very important for the development of mobile energy resources. While speed is not as significant an issue here, conversion efficiency is an eminent problem for cheap, large area energy transduction. This dissertation addresses the development of tunable, fast, and low cost wave detectors that can operate at room temperature and, when produced in large array format, can harvest Earth’s terrestrial radiation energy.

You can download the full dissertation at:

http://drum.lib.umd.edu/bitstream/handle/1903/13528/Yesilkoy_umd_0117E_13795.pdf

A search will bring up a lot of similar devices, since the principle of operation is well-known.

List of relevant articles on R-G

http://revolution-green.com/free-energy-by-simon-derricutt/ The first article, that introduced the piezoelectric idea to get Free Work from atmospheric pressure. About 4 years ago, but the ideas predate that by decades.

http://revolution-green.com/robert-murray-smith_ambient-energy/ RMS’s replication of the Lovell device, which shows that it’s not that difficult to make a device that harvests ambient energy from room-temperature, and uses only a single heat-sink.

http://revolution-green.com/paradigm-change/ gives some details of Dan Sheehan’s work. This has gone dark since, so I don’t know what progress he’s made.

http://revolution-green.com/thoughts-proell-effect-similar-ideas/ which explores Ken Rauen’s ideas and his intended method of beating 2LoT. At the time, I said it wouldn’t work based on Conservation of Momentum considerations, but now I would say he doesn’t loop the energy. He did however point me at MerCaT devices, which are PVs that work at IR energies and are commercially available. Devices that loop energy exist, but the output is somewhat too low to be useful. Ken came close to seeing that an energy-loop was possible but chose the wrong way to implement it.

http://revolution-green.com/simon-derricut-free-energy-philosophy/ is where Mark took a comment and turned it into an article. Just about the start of seeing Free Work as a concept.

http://revolution-green.com/some-philosophy-on-work-and-how-we-might-get-it-for-free/ is where the Free Work idea was really put forward, and also the invention of 3 different kinds of work so we could better analyse where the energy was flowing and what was being done.

http://revolution-green.com/infrared-pv-gives-power-at-night-too/ shows a nantenna array and what it can do. Again, this is a real device that loops energy, and is a perpetual motion device. Since the reasons for it working are well-known in theory, and it’s only the manufacture that is a bit expensive and difficult, no-one realised that it is disallowed from working by thermodynamics theory. Luckily, being against theory is not a bar to something actually working experimentally.

http://revolution-green.com/some-energy-basics/ explains how energy moves, and in the comments there’s the first set of points (where all must be correct in order to achieve the conclusions) that specifies how to actually make a Free Work system. One needed a bit of expansion to show why it was logical to remove the electricity generated by a PV or nantenna from the heat equations.

http://revolution-green.com/new-year-simon/ was where the underlying error in the formulation of the 2LoT was exposed, in that the heat was modelled as a fluid that would only flow from hotter to colder. Here I show that it flows in all directions possible, and that just because we normally measure the net flow doesn’t mean that that is the reality. Again, Mark took a comment and made an article of it.

Most of these articles have further explanations in the comments section. Sometimes it takes many words to transfer an idea, and there are certainly a lot of words here. 

Damn! The MiB just turned up!
-----


February 11, 2017 (slightly edited by Mark Dansie)

DEFINING FREE ENERGY 

Down at the philosophical base, it’s not actually Free Energy that we want. We want to get work done for free instead. Work can only be done when mass/energy moves from one configuration to another. After the work is done we have just the same amount of mass/energy as we started with, just in a different configuration.

If we run a cyclical process, then at the same point in the cycle the mass/energy will be in a certain configuration and thus we need a way to persuade the energy in our system to return to an imbalance so that we can again do work with it. When we look at a Free Energy device, therefore, we need to look at how that imbalance is restored since this does not naturally happen.

If you have a local concentration of energy, then it will naturally spread out in the space available until it is of uniform density and no more work can be extracted. To compress it into a small volume again, we need to do work. Having confined it by doing work, we can then get that work out again by letting it naturally expand. It’s that re-compressing of the energy density that doesn’t naturally happen, and without that you can’t continually extract work from the same amount of energy.

This is maybe a difficult concept – it took me a while to see it clearly. Maybe a bit more talking around it might be useful…. Work and energy are not the same thing, though they have the same dimensions and the words are largely used interchangeably.

When we burn a fuel, we release the energy in it that is stored as mass and we produce a local concentration of energy. For example burning diesel in a motor. We can then allow that local concentration to dissipate to the environment and do some work in the process. The more concentrated that energy is when we start extracting work (so – the hotter it is) the more work we can do with the same amount of energy, though the possible work from that energy release can never all be transformed into work done. That’s Carnot’s law in another guise.

A machine therefore needs to have an identifiable energy flow that it can harness to do work. Normally we burn a fuel to create that energy flow, but of course there are natural energy-flows as well that are driven by the Sun one way or another. If you can’t see that energy-flow, it won’t work. That’s about as simple as it gets.

That endless stream of energy from the Sun can drive a lot of things, and if we learned to use all of it we could do about 100 times the total work done in the world. It gives rise to HEP, wind, solar power, wave power etc.. The changing gravity caused by the Sun and Moon gives rise to tides, and again we can get power from the energy stored in the moving masses.

If ZPE worked, then I’d expect to see some clues somewhere that energy was being created somewhere. If it can happen, then it will and we should be able to see it. We don’t even see anything in cosmology that could only be explained by ZPE, so it’s most likely that it’s not a real source of new energy. I’d still back people honestly researching it, but I see it as a dead end.

THE FLOW OF ENERGY

There’s a large flux of neutrinos that could be an energy flow that could be tapped, except that they interact so little with the matter we are made of that it’s not a practical source of Free Work. Unless people can give a good reason why their layered (normal) materials should stop such subatomic particles, therefore, such machines are science fiction. Yep, most likely a scam, too….

In the end, therefore, we need to look for the flow of energy that is being harnessed, and why that energy is flowing. If we’re burning something (coal, oil, Thorium, Palladium, Nickel…) then we’re producing the energy flow, and that won’t be Free Work but could be pretty cheap if we get it right. Otherwise we need to find an energy flow that naturally exists (solar, wind, tides, geothermal…) and harness it to do work (and again the idea is to do it cheaply). I’m open to other ideas, but I haven’t seen one that hangs together and can be proved.

Anyone who can control the flow of something essential to life (such as water, food or energy) can control others – that’s Hydraulic Despotism. We see it in action at the moment. Working on ideas that pass such control to each individual so that he/she can’t be dictated to is definitely a good thing. Given Free Work (or at least very cheap work) we can extract water from the air and use indoor farming to produce food nearly everywhere on the planet, and thus giving people their own power source rather than relying on the grid can enable all the rest. At this moment in time, solar (and possibly wind) power can do this but there are problems with making this constantly available 24/7/365.

Boiled down, though, when you’re looking at the various inventions that purport to give Free Energy you should look to see what energy-flow is being harnessed. If there’s no such flow, it won’t work.

DEBUNKER BY DEFAULT

Mark Dansie started off trying to also subject the various Free Energy claims to such a scientific verification since he wanted to see one that really worked. What he found was bad measurements and a fair amount of fraud, and in the meantime gained a reputation as a debunker. In the nature of things, you will no doubt be similarly castigated for proving that most if not all of the projects you test don’t actually work as stated. This is because most of them rely on a principle that has been shown to not work many times. To get any of those principles to work they will need to be changed and to add something new that no-one has done before.
-----


November 14, 2015

This post was sparked by Dendric who has been arguing for various ideas as regards Free Energy, and maybe feels annoyed that we aren’t accepting the new theories that explain the way the universe works in a different way to the standard. Dendric has put up his own website with pointers to various theories and ideas at http://ufsolution.wix.com/unifiedfieldsolution  for people to look at.

My problem is not with new ideas. When I was learning physics a long time ago I had to learn quite a few new ideas, some of which were mutually exclusive – they couldn’t both be correct but we had to accept them as such because they gave the right predictions as to what we actually measure when we do the experiments. Part of the drive for a Theory Of Everything is to try to get a mutually-consistent set of theories that don’t have those ugly mismatches at the edges. So far, no-one has succeeded in making such a theory that is understandable to most people. Although String Theory is supposed to get pretty close to being consistent, it’s said that there are only half a dozen people in the world who actually understand it (and I’m not one of them). Still, as a mainstream theory, with 11 dimensions and 8 of them curled up so we can’t see them, it certainly takes a lot of swallowing. Given that, the Aether theories look pretty tame by comparison and thus easy to believe that they might be right.

The problem, though, is when you come to predicting what we’ll measure in any particular experiment. If you predict that you will be able to utilise ZPE, then when you build a device that is designed to do just that then we should measure more energy out than in. So far, no-one has shown that actually working, so there must be a problem with the theory. We still have the logical problem of not being able to prove a negative, so although a failure to demonstrate ZPE is not in itself a demonstration that it is impossible, when enough people have tried and failed you should have a pretty good idea that the principle is wrong.

The common problem with all the OU devices either currently advertised or in the archives is that we can’t actually test one and show that it works. There is no Searl Generator that works. The one that was claimed to work flew away. Tom Bearden’s MEG has no extant working version. Bill Alek’s OU Auroratek transformer can’t be shown to work for longer than it takes the battery to run down, and the same applies to the battery-powered GAIA bubble-machine. There’s no working Perendev magnetic motor, there’s no working Keely motor, there’s no working John Rohner motor, there’s no…. Yep, it gets boring after a while, since no matter how clever the theory is there isn’t an actual working device you can go see and measure the output. Yep, you can point an FLIR camera at things and see where the heat is (and isn’t) but there is nothing you can take home with you and plug in a real load and it will just work. I have no doubt that my emergency generator works; I fill it with fuel and oil and pull the cord, after which it will give me a couple of kW until the fuel runs out. I have no doubt a UPS would keep my computer running – until the battery runs out. The point about an OU device (or other exotic energy ideas) is that I would get the same result without needing to either refill the fuel or recharge the battery. If I had to refill the fuel can, water would be a nice thing to use instead of an oil derivative, being a whole lot cheaper.

A public demonstration of an OU machine is quite acceptable to the authorities. Rosch did a pretty-public demo and convinced quite a few people, and there are a lot of examples from history where various OU devices have been openly advertised and demonstrated. Rossi has done a few too…. If the demo is well-staged, then a fair number of people have left convinced they’ve seen OU or exotic energy in action. Getting one to take home and use is another story though. You can order one and pay for it, but somehow the delivery-date keeps getting extended until it’s obvious to a blind man it isn’t going to happen. GDS are playing that game at the moment, hoping for enough people to pay before the game is up and Greg Potter has to depart to another country that hasn’t an extradition treaty. Should we give him a bit more time to deliver and avoid telling the world it’s a scam in case it isn’t? Do we need to keep such an open mind that we forgive the slight lapse of never showing it working over the last 4 years or so (that we know of) whilst still offering it for sale?

A freshly-minted science graduate will have a known set of things that are true and a known set that aren’t. With experience, though, and some time actually working at new ideas, that graduate should come to the realisation that our theories are only the best we know so far and that even well-respected ones can be overturned and can be shown to have a limited range of validity. They are fine for the normal situation, but if you go outside the known limits then you need to apply some more-complex maths or something totally new happens. However, what the theories really do for us is to predict what will happen when we do something, and what we will measure as a result.

What I ask for Dendric’s theories (or at least the ones he espouses) is a set of predictions. The predictions will be of the form: if you design something this way, and build a device to these specifications, you will measure this result. That result, of course, needs to be different to what the standard theories would predict for the same configuration and actions. I want a degree of specificity. Can we get antigravity or at least something that gives us a thrust without needing reaction-mass being ejected? Can we get an endless stream of work out of a device without needing to refuel? Can we get a miniature nuclear power reactor that is safe to use? Add in your own wish-list here….

In the absence of such specific results, it’s fine to discuss the ideas and think about a different explanation of the way the universe works but it should be realised that until there is some concrete testable result attainable then it can’t be proven. To show that the ideas have some validity, we’ll need to actually build something and get the measurements. Until that data is in and can be validated by other people, we might as well stick to the theories that work well-enough.

I can claim I have a unicorn in my field below the vines. I can produce a photo of her if needed (but note that the horn doesn’t show up on a digital camera and can only be seen by virgins anyway). Difficult to get a photo anyway, but I managed it. In order to prove that Réglisse isn’t a unicorn you have to come visit with a team of unicorn experts (and they must be all virgins) to test out my theory. I expect I can find reasons to dismiss your dismissal of my claims, by calling the sexual purity of your testers into question, so apart from the cost of the whole exercise you can’t stop me claiming that Réglisse actually exists and is a unicorn. Yep, gets a bit silly, doesn’t it? A logical response, since I’m doing the claiming, is to ask me to prove I’m correct and to assume I’m either mistaken or lying to get publicity and/or money. In the same way, someone with a claim for some new physics has to show me that it’s valid. It’s not up to me to prove that they are wrong, but it’s up to them to prove they are right or at least show that the claim is reasonable. A really good way to show that the claim is valid is to build a real device and to show that it works as specified.

In the end, it all boils down to Mark’s catchphrase of “Show me the data!”. Without the data from a working device, any theory, no matter how attractive, is just another theory among many. Prove it.
-----


October 13, 2015

ENERGY AND WORK

At the basic level, all energy (and work) is free. It costs us effort to mine it or harvest it, or we have to pay someone else to do that job for us. Although a magnet motor would be pretty nice if it worked, there’s still a cost involved in manufacture and we’d look at that cost relative to what the other methods cost us. The breakthrough meme is thus a little fuzzy, since what we’re really looking for is a method of powering our lives that is cheaper.

TESTING FOR OVERUNITY

Any such method that was real would be very easy to demonstrate to the satisfaction of any scientific test, and so refusal to subject it to such a test is a big Red Flag. If you ignore those red flags and invest anyway, you’ll probably lose your money. That’s about the size of it….

Edit: I just read a nice comment from Stuart at https://disqus.com/home/discus… where it looks like Stuart is totally missing the point that a real OU machine would be dead easy to test and to demonstrate. He’s answering about Mark D here:
“So as you say Mark struggles to determine what constitutes an OU device, he struggles to understand how it works when he does see one, and he struggles to think outside the box without his instruments to tell him, because he is so conditioned to other factors influencing the way he thinks.”
“Mark Dansie and his associates need to learn to assess possible OU without using their instruments. To work using only his initiative and imagination.
Will he makes mistakes, of course he will. Just like when driving a car you make mistakes, making a wrong move which lands you in a near miss or other bother like thinking, damn I should have turned at the next corner not this one.
The main thing is to go and look at OU with an open mind, not a closed one.
The question should be “will it work” ? Not why it won’t work because I cannot work outside my laws of physics prison walls.”

No, Stuart, the question isn’t why it will work or won’t work. The question is does it demonstrably work, and then you can go into the question of why if you want to. Asterix gave a pretty good method of proving whether the Rosch device worked – stick it in the wilds somewhere just doing real physical work. My version of that to test an OU idea is to set it going and walk away from it leaving it running doing some real work. The longer it keeps on doing work, the more you have to accept that it’s not faked in some way. Measurements are the icing on the cake, but the body of it is that demonstration of an appreciable amount of work being done beyond what could be stored inside it with known methods (unknown methods may of course also be a major discovery even if the thing itself isn’t really OU). Pump water up into a reservoir or lift weights and you can see the amounts being lifted without any measurement kit around.

Even Yildiz with his fan could have demonstrated this way, by just leaving it running untouched for any reasonable time. Put it on the table, set it running and walk away. After a few months any conceivable battery or other power source would have been exhausted and credibility would gradually arrive. Running a fan at around 8W or so for 5 minutes or so and then stopping it and handling it is bound to destroy credibility, when that amount of energy could be supplied by an easily-changeable coin-cell and a bit of sleight-of-hand.
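
A quick back-of-envelope check on that coin-cell remark, as a Python sketch. The CR2032 figures are nominal assumptions, and this only compares energy – whether a coin cell could actually push 8W continuously is a separate question:

```python
# Energy used by the demo versus the energy held in a typical coin cell (nominal figures).
fan_power_w   = 8.0
run_time_s    = 5 * 60
energy_used_j = fan_power_w * run_time_s             # 2400 J, about 0.67 Wh

cr2032_wh = 3.0 * 0.220                               # ~3 V x 220 mAh nominal capacity
print(f"demo energy: {energy_used_j / 3600:.2f} Wh, CR2032 holds roughly {cr2032_wh:.2f} Wh")
```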

So: first demonstrate that something indubitably works. Then you can get down to the measurements and quantify just how well it works and modify the theory that has worked well up until that point. Recognising a real OU device will actually be pretty easy when it does get shown. It’s also worth noting that PVs comply with these ideas pretty nicely, and explaining the photoelectric effect was quite a useful change to theory (thanks, Einstein). Is it hard to demonstrate a PV works without measuring kit? No, just connect it to a motor and show work being done – as long as there’s light it will work.

Can we get something like a PV working on any other part of the EM spectrum? Of course – it’s all the same principle. Things we can do with RF we can also do with light, X-rays and gamma rays if we get the device the right dimensions and construction. If we look at the power flux available and the efficiency with which we can convert it to electricity then we can predict just how much power we can get, so Steorn’s latest offering may indeed actually work this time. The cost of that power per kWh may however be somewhat excessive when compared to other sources and it may not work everywhere – may be a problem if you take it hiking and rely on it to charge your satellite phone. If you tell me it will supply kW in future, you have to demonstrate that and not just claim it. I expect it will be stuck in the sub-watt range since there’s not much more than that actually available.

I look forward to the time that someone shows a device that breaks the laws of physics we know. Time for yet another modification of the laws based on experimental proof, and some fun for people working out why. At any point in time, the laws are simply “the best we know so far” and are open to disproof. That’s the way science works.
-----



September 15, 2015

Recently I’ve had a couple of people contact me with some ideas for how to get “free energy” using buoyancy. I’ve also reviewed some ideas using motors and generators, and explained why I don’t expect them to work either. It’s maybe time to have a bit of a rant (again) about the difference between work and energy, and why I think that whereas Free Work may be attainable (and has been demonstrated), Free Energy is most unlikely to be shown. 

Let’s start with the paradox. There is the principle of Conservation of Mass/Energy, and since mass and energy are equivalent via E=MC² then in a closed universe we can’t change the amount of “stuff” we’ve got (mass/energy) but whatever we do we will end up with no more and no less. If we then look at a normal physics explanation of a process, we are told that we start with X joules of energy and we can do a little less than X joules of work, and then we have no energy left. This obviously doesn’t agree with the aforementioned CoE (conservation of energy). We really have a semantics problem, since the words we are using are not adequate. 

I’m going to separate the words out. We have two forms of energy here (kinetic and potential) and we also have work. Potential energy includes mass, energy that is stored in springs of some sort, gravitational potential (may be stored as mass, but I’ll skip that question for now) etc.. Kinetic energy is stored as things that are moving, so we include photons, moving masses and other unbound energy not stored as mass. Work is on the other hand a bit trickier. Whenever we do work, at the end of it things are just in a different configuration than before. Some work goes into kinetic energy (KE) and/or potential energy (PE), some of it is simply that a lump of stuff is a different shape or location. Hammering a lump of iron into a sword (or ploughshare) takes a lot of work, but at the end of it we have the same lump of iron in a different shape. Work is not a conserved quantity, and that is an important observation. 

Generally, when we’re talking of how much energy we’ve got, we don’t look at the whole amount available, since that would produce ridiculous numbers when we count up the number of joules in the various bits of stuff we’re looking at. Instead, what we look at is a local excess of energy over another place and call this what we’ve got. The total energy in a litre of Diesel is massive, but we look at what we get from combustion of it (that converts a very small quantity into KE in the form of heat) and how much more heat we have than the ambient temperature. To convert that heat energy into work we need to let that excess KE move into the ambient and we harness that movement to do the work of moving our car from one place to another. Since we start from stationary and end stationary (and normally end up in the same parking-spot at the end of the day), in fact all that KE we’ve liberated by burning ends up as heat in the atmosphere and we’ve actually done no work at all. Yep, it gets a bit complex when you really follow where all the energy goes to and what has really happened. 
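
To put a number on “a very small quantity”, here is a rough Python sketch using typical assumed values for the density and heating value of diesel:

```python
# How small a fraction of the rest-mass energy of a litre of diesel is released by burning it.
# Density and heating value are typical assumed figures.
c = 2.998e8                       # speed of light, m/s
mass_kg       = 0.84              # approximate mass of one litre of diesel
rest_energy_j = mass_kg * c**2    # E = m c^2, about 7.5e16 J
combustion_j  = 36e6              # roughly 36 MJ per litre released by burning

print(f"rest-mass energy : {rest_energy_j:.2e} J")
print(f"combustion energy: {combustion_j:.2e} J")
print(f"fraction released: {combustion_j / rest_energy_j:.1e}")   # around 5e-10
```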

I’ll try to restate all this a bit more simply, though. The sum of KE and PE remains constant no matter what you do. We get work done when we harness the movement of energy from one place to another. Energy is conserved, but work is not even though we use the same units to measure it. 

A local concentration of kinetic energy will naturally spread out until the energy density is even and without any local high concentrations. This can be seen using water – pour a glass of it into a bowl and you end up with a flat surface where no point is higher than another. Pour the glass of water through a turbine and you can get work done in the process, but you don’t have to do this so the work obtained can be anywhere from zero to (almost) the available excess energy when the glassful gets poured. 

Part of Quantum Theory says that there is a residual amount of kinetic energy left even at absolute zero, and that things will thus still move. Some people think therefore that this Zero-Point Energy (ZPE) should be able to be tapped. There’s some logic there, in that if something is moving then we should be able to make it do some work, but in the case of ZPE I suspect it’s a problem of how we measure things and that that energy is not actually available but is instead imaginary. Just because we measure something to be in a slightly different position does not mean that it’s actually moved, but that since our measurements require us to use some sort of particle to hit the thing we can’t really be totally sure of the measurements. There’s an underlying uncertainty of where the fundamental particle actually is, which is a probability function. ZPE therefore seems to me extremely unlikely to be a source of new energy and thus to break CoE.

With mechanical systems such as the motor/generators, gravity/buoyancy machines or electromagnetic systems, we start by putting some work in to get it going. That work gets stored somewhere as either KE or PE, and if we get work out of the machine then the available stored KE and PE reduces until it reaches zero and the machine stops. An equivalent system would be a bath full of water, where you think that if you pour in a glass of water to make it overflow then you’ll get a continuous stream of water out. It doesn’t happen – you just get a glassful out and then it stops overflowing. You can’t get more energy out than you put in. 

So far I’ve trashed pretty well all of the “traditional” Free Energy systems as being non-workable. All is not lost, though, since I’ve pointed out that work is not a conserved quantity. Since I’ve also pointed out that work can be subdivided into stored KE, stored PE and displacement, and it should be pretty obvious that a simple displacement is a zero-energy transaction, there is however a chance that we can get the displacement-type work (which is often what we want) for free. That displacement is a zero-energy transaction has been known since Newton, since he said that a body in motion would continue in that motion unless there was a force acting. So – if you lift something up you’ve put in work which is stored as gravitational potential energy, and if you put it down again you can get that energy out again as work, but if you move it sideways then no work is done except against friction, and that can be reduced arbitrarily close to zero.

In looking at a purported Free Energy (or perpetual-motion) machine we should thus look at the total interplay over a cycle between KE, PE and the three kinds of work (Stored KE, stored PE, displacement). Count the joules in and out and decide how to assign them in those 5 buckets. Do this assiduously and you can decide for yourself whether any Free Energy invention will actually work. The majority show a zero-sum overall when they are idealised to zero losses, so in real lossy life will just slow down and stop. 
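
Here is a minimal Python sketch of that joule-counting; the helper name and the example numbers are purely illustrative and not from any particular device. Displacement-type work stores no energy, so for the balance it sits with the losses:

```python
# Per-cycle joule accounting: energy in must equal work out plus changes in stored KE/PE plus losses.
# Displacement-type work stores nothing, so it ends up counted with the dissipation/losses.

def audit_cycle(energy_in: float, work_out: float, delta_stored_ke: float,
                delta_stored_pe: float, losses: float) -> bool:
    """Return True if the claimed cycle balances (CoE holds), False if joules appear from nowhere."""
    balance = energy_in - (work_out + delta_stored_ke + delta_stored_pe + losses)
    print(f"imbalance over one cycle: {balance:+.1f} J")
    return abs(balance) < 1e-9

# An idealised magnet-motor claim: no input, no change in storage, yet positive work out.
audit_cycle(energy_in=0.0, work_out=10.0, delta_stored_ke=0.0,
            delta_stored_pe=0.0, losses=0.0)   # imbalance of -10 J, so the claim fails CoE
```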

So, what does work? We know that solar panels work pretty well. We also know that if we have any cyclic motion or wave then we can rectify it and get some work done. One thing to remember is that the total energy doesn’t change – if we take some in one place it has to turn up in another. All we need to look for is a flow of energy that exists and to divert it so that it does what we want done before we let it go again. Displacement-type work does not take energy to perform, though there is an element of borrowing and returning from the energy bank. Energy is put in to start the motion, then the motion continues until we stop it and get the energy out again. That sword-to-ploughshare change is displacement-type work, in that no excess energy remains after the work has been done. The vast majority of what is normally regarded as work is displacement, where at the end of a cycle there is no work (KEwork or PEwork) actually done. Displacement work does not store any energy, and we should therefore be able to do it without using energy – all the energy we do put in ends up dissipated. At least we should be able to do it with a draw on the energy bank followed by a return of that energy. 

I think it’s possible to harvest infrared to get a reasonable amount of electrical power. The Robert Murray-Smith experiments showed that in a small way, and it may be possible to get the harvest up by quite a few orders of magnitude. For this, we’re stopping an IR photon (which is moving energy) and converting it to electrical energy (moving electron in this case) and then either using it in some movement (so it’s re-emitted) or storing it in a battery as chemical potential energy for a later release. Instead of letting it go along whatever path it would otherwise take, we’re redirecting the path of that energy so that it does some work for us while we’re harnessing it. In a while I may have some data to present on performance of real devices. They are however just a wee bit hard to actually manufacture.

What I’d like you to take away from this rant is that “energy” needs to be redefined as to whether it’s a local excess or overall, and whether it’s PE or KE, and that “work” should be subdivided into KEwork, PEwork and DisplacementWork (and remember that DisplacementWork is actually no work at all, takes no energy and should thus be Free Work). Getting the word definitions right should help in seeing whether any system does what it is claimed to. If you don’t have the language right and thus confuse things that should be separate, then you can’t think correctly and get the wrong answers. I’ve used a lot of words in order to try to get across an idea that is really quite simple but is normally obfuscated by the common language.
-----



August 22, 2014

We’ve been somewhat short on content for a while, but at this moment Mark Goldes and Ken Rauen are coming back to trying to get free work from ambient energy (and of course free money from donations). I don’t particularly want rants on the status of Aesop as replies here – we all know that a lot of money has disappeared without any working product and that the promises to get it right this time thus have a hollow ring, but let’s try to follow the rabbit-hole and see where it leads instead. Mark Goldes replied politely to my comment on PESN about this, so let’s try to keep the replies polite as well. Argue with the ideas, not the people.

Proell Effect

The start-point here is what Aesop Institute says at http://www.aesopinstitute.org/no-fuel-piston-engines.html which is unfortunately coming through Scribd so might not display on some peoples’ screens. Oh well – the gist is that Ken Rauen has thought up a gas-cycle that works using a heat source but no heat sink, and has developed the original 10-piston idea to one that only needs 4 pistons. Also worth looking at is Ken Rauen’s ideas on this from 10 years ago at http://www.pureenergysystems.com/academy/CarnotExcedence/ (or do your own search). Worth noting that it was said to work at that time, and that in the intervening years no-one has verified that it works.

Let’s start with some concepts of temperature. In a gas, the macroscopic energy stored at a certain temperature depends on the heat capacity of that gas. This heat capacity depends on the translational, rotational and vibrational motion of the gas molecules, and thus different gases will contain a varying amount of joules of energy per litre when they are the same measured temperature. In a gas-cycle engine, we don’t really need to worry too much about things other than the velocity of the gas molecules and the gas will be the same at all points in the process (no chemical combinations), and we could use a monatomic gas for simplicity. Pressure is just molecules hitting the walls of the container, and they will recoil on average at the same speed that they hit the walls unless the walls are either moving or are hotter/cooler than the gas. At atmospheric pressure, any one molecule in air at STP collides at around 7GHz, so we don’t normally notice the random variations in pressure caused by the individual collisions. The directions are random, and the speeds follow a Boltzmann distribution. Incidentally it’s worth noting that the smaller the microphone and the lower the air-pressure, the more you notice the random variations as white noise – although this makes an effective lower limit on a useful size for a spy microphone before the noise gets too great, this can itself be used as a source of energy from the ambient.

By Newton’s laws, which are accurate enough at the ~500m/s of a gas molecule at STP (and still apply relativistically though they become a bit more complex), collisions will conserve energy and momentum. Although in reality the walls are also made of atoms and will thus exchange energy with the gas, let’s have an ideal piston and cylinder to start with that does not alter the gas energy in any way – the real situation introduces losses so an ideal container will be the best system possible.

Compressing the gas puts energy in since the wall is moving and the molecules will rebound with a higher kinetic energy, and we measure this increased kinetic energy as higher temperature and pressure. Similarly we get that energy out again by the expansion of the cylinder and the gas will cool down – all nice and perfect and ideal at the moment and those laws of conservation of energy and momentum will always apply even as we use real-world rather than ideal conditions.

If you put energy in as heat, then that is shared amongst the gas molecules’ kinetic energy and will increase the measured pressure. You can get this energy out either as joules of heat or as joules of work (or a combination of both) but providing those laws of conservation still hold then in any cycle you can’t get out more than you’ve put in. The important word here is cycle, since any piston engine will run a cycle and will be at particular conditions at any specified point.

So, can you produce a “cold point” using pistons such that it gives you a flow of heat energy from ambient, and from that flow of energy you can extract the work? To create a “cold spot” you can use adiabatic expansion – where we start at ambient temperature this will cost us work. The amount of work we can then get by using up the flow of energy from ambient to this “cold spot” is exactly the amount of work we’ve just put in – and that’s if it’s all ideal. This is the big Ooops in the idea, and why it can’t work to get energy from the ambient. Note that Ken Rauen built and tested a motor that used the Proell effect a decade ago. He said it worked, but I’ve never seen anyone confirm that. If it worked, it should just keep working (and giving you work out and cooling down) until it wore out. Obviously it didn’t. If it had done, then he would have demonstrated it widely (as would Mark Goldes) and have got a Nobel prize at least. The Proell effect seems to be a non-effect. By building it better with lower friction and better insulation he could make it spin for a longer time before it stopped, but the same can also be said about magnet motors. You can get it arbitrarily close to 100% if you spend enough time and (someone else’s) money, but you can’t get that over-100% out of it.

To analyse any energy machine, we need to look at what the energy flows are in joules at all points – if we look at temperatures then we can make errors, but count the joules in and out and things become obvious. It also helps if, instead of using equations naked, you put in actual amounts of gas etc. that you want to use so that you have real numbers to work with.

So far I haven’t mentioned the laws of thermodynamics. I don’t need to – this works on simple mechanics of collisions but just happens to end up agreeing with thermodynamics. What this says is that any idea of trying to use ambient energy (single heat source/sink) in a motor with pistons just isn’t going to work – while momentum is a conserved quantity you can’t do it with pistons or macroscopic structures. In a real situation, it won’t even break even – the losses will stop it dead. I had a discussion with Bill a while back on one of Tesla’s ideas that used a similar proposition of creating a cool sink and then having an almost Carnot-efficiency motor using the difference between that and ambient and thus converting all the input energy to work – again it seems logical until you go work out what happens to the energy flows.

OK, so that’s knocked out the idea of a piston engine possibly using ambient energy to do useful work for us. Are there sneakier ways? About a year and a half ago I put up one idea here that skews the probabilities of passing a barrier. Nope, I haven’t managed to build one of these and see if it really works or not, though I have thought of a possible way to get the accuracy needed. One of these days…. Since it would do a ridiculously small amount of work and is difficult to make, its only value is to annoy the people who say it can’t work and to satisfy myself that it does. OK – if it does….

Note that if you are a bit creative then the ambient temperature can be split into two sections. One is the ambient conductive energy of the air or other material substance, and the other is the temperature of the sky. Say ambient air temperature is 20°C. I’ve measured blue-sky temperature (using an IR thermometer) at -35°C in summer here. There is a temperature difference there we could exploit with a piston engine with atmosphere as the hot side and a radiator with a parabolic mirror pointed at the sky as the cold side. Clouds, by the way, measure somewhat warmer and could be somewhere round -2°C, but that varies a lot. You’d need a big radiator and mirror to get any reasonable amount of power, and the whole thing will be somewhat large for the power produced. It’ll be free energy from ambient, but would probably cost you more in depreciation and maintenance than using power from the wall.
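
A rough Python estimate of that sky-radiator idea, assuming blackbody behaviour, perfect optics and the temperatures quoted above. It overstates things a little, since the radiator plate would actually settle somewhere between the two temperatures, but it gives the order of magnitude:

```python
# Idealised estimate: net radiative exchange between ambient and clear sky, times the Carnot factor.
sigma  = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
t_hot  = 293.15       # ambient air, ~20 C in kelvin
t_cold = 238.15       # clear-sky radiant temperature, ~-35 C in kelvin

net_radiative_flux = sigma * (t_hot**4 - t_cold**4)    # W per m^2 of radiator, ~236 W/m^2
carnot             = 1.0 - t_cold / t_hot               # ~19%
max_work_flux      = net_radiative_flux * carnot        # a few tens of W per m^2

print(f"net radiative flux ~ {net_radiative_flux:.0f} W/m^2")
print(f"Carnot limit       ~ {carnot:.0%}")
print(f"ideal work output  ~ {max_work_flux:.0f} W/m^2")
```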

Energy moves around. In the course of energy moving from one location to another we can get out some work. Something will move from one place to another, or something gets heated, or something changes shape. The sorts of things we loosely call “using energy”. Since mass/energy is a conserved quantity, of course we can’t use it up. We can’t make it or destroy it, we can only move it from one place to another. At the end of the work you’ve done, exactly the same amount of mass/energy remains as you started with, but it’s in a different configuration (more spread-out, and thus higher entropy). As I’ve said before, there’s a bit of a language problem that affects our thinking on work and energy.

One important point that isn’t much noticed is that all flows of heat are in two directions (or more). While the hot body will radiate more to the cold body, that cold body is still radiating energy to the hotter one – replace the cold body with a colder one and you can measure the increased heat loss from the hotter body. Thus even at ambient temperature, there must be a heat flow in both directions and thus if you can interrupt just one direction of flow you have the energy-flow needed to get work out. Take a while thinking about that one before you automatically reject it. Without that radiation, the IR thermometers wouldn’t work.

Let’s say we have a white-hot radiating body and we take some selected spectrum of that suitable for photoelectricity to illuminate a PV. The PV gives us electricity – we know that works. If instead of the PV we had a photoelectric effect that worked on long-wave IR, and again use the right spectrum for it, then again we’d expect it to work. If the PV worked on the IR we get at around 300K, then we could expect an electrical output from that, too, and that’s around ambient temperature. By its nature it won’t be a high voltage per layer (of the order of maybe 10mV) but you could have a lot of layers in series. We may not be able to yet make such a semiconductor layer but it should at some point be possible. If it only works at a higher temperature than ambient (say 300°C), it would still be almost 100% efficient at converting heat to electricity which would be worth it. Maybe it’s actually possible to use ambient energy to do work, providing we don’t try to do it using thermodynamics. Just don’t expect to be able to do it with a piston engine and a single heatsink – it needs something sneakier.
-----


Free Energy 

May 10, 2013

I suppose I was around 12 years old when I built my first “free energy” device. An aerial wire, a coil, a variable capacitor, a diode and a crystal earpiece. I could listen to the BBC’s Third Programme using it. At Droitwich there was a massive aerial system I’d seen from the train in passing, and it’s said that people living close enough to it listened to the Third Programme through the fillings in their teeth picking up the radio waves.

Wherever there are waves, and we can put in some device that changes the probability of the wave passing in one direction as opposed to the other (some form of diode) then we’ll be able to harvest energy. Some waves we can get energy out from in both directions – the equivalent of a bridge-rectifier. Ocean waves, sound waves, radio waves, light, heat… you find a lot of waves in Nature, and they are all sources of free energy if we can convert them to a version we can use.

In a normal gas-filled heat engine, the molecules are all bouncing off each other producing the pressure. Put more heat in and the molecules bounce faster and the pressure goes up. Allow one wall of the cylinder to move (use a piston) and that pressure can move the piston, reduce the pressure, and the random motion of the gas molecules (heat energy) gets converted into nice linear motion of the piston. In the process, since the gas molecules will bounce off the piston at a lower velocity than they hit it at, they will cool down. Is there another way to use this effect? We want to use single molecules so that Newton’s laws of motion apply, and energy is thus transferred by collisions – the molecules cool down and we harvest the energy.
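
That cooling against a moving piston is just the standard elastic bounce off a much heavier moving wall: in the lab frame the molecule comes back at v − 2u when the wall recedes at u. A small Python sketch with illustrative numbers (the molecular figures match the ones used later in the appendix):

```python
# Elastic bounce of a light molecule off a much heavier moving wall.
# In the wall's frame the speed is unchanged, so in the lab frame v -> -(v - 2u).

def rebound_velocity(v_molecule: float, u_wall: float) -> float:
    """Molecule moving at +v towards a wall moving at +u (u < v); returns the signed rebound velocity."""
    return -(v_molecule - 2.0 * u_wall)

m = 4.8e-26    # kg, average air molecule (same figure as in the appendix)
v = 459.0      # m/s, typical molecular speed at STP
for u in (-10.0, 0.0, +10.0):   # wall advancing (compression), stationary, receding (expansion)
    v2 = rebound_velocity(v, u)
    print(f"wall at {u:+5.1f} m/s: rebound speed {abs(v2):.0f} m/s, "
          f"KE change {0.5 * m * (v2**2 - v**2):+.2e} J")
```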

Around a dozen years ago, our design group was tasked to produce a couple of patent applications. Since we were basically involved in redesigning boards to reduce cost and improve reliability, we could only find one real copier-related idea, so that gave us only half the task done. That patent was granted after the site shut down and my job disappeared to Hungary (much cheaper there), but that’s maybe another story. At the time, therefore, we had to put another patent idea in. I thus came up with a perpetual motion machine as the other. I’ve had this open on the net for a long time now, so anyone who manages to make one can do so, but it won’t be patentable as such.


Imagine a piece of gold-leaf that is pierced with holes that are funnel-shaped. An air molecule that is on the left in the diagram has a better chance of getting through the holes than one on the right moving left, since whereas a hit on the inside of the funnel tends to make the hole bigger (and guide the molecule towards the hole), a hit on the outside of the funnel tends to make the hole smaller. In order for this to work the gold-leaf must be thin and thus flexible in the funnel, and the holes should be slit-shaped and of the order of size of the mean-free-path of the gas molecules. We have changed the usual probability, of the gas molecules passing through the holes in either direction equally, to one where they have more chance passing left to right. Since we are dealing with large numbers of molecules, this translates as a higher pressure on the right, and the equilibrium happens when the higher pressure on the right sends the same number of molecules right-to-left as left-to-right.

The gold-leaf thus generates a movement of gas from one side to the other and an unequal pressure on either side. If this pierced gold-leaf is then mounted in the vanes of a windmill, the windmill will turn without any energy being input – the energy to do this comes from the heat energy in the gas.



Connect this to a generator and we would generate electricity from the energy in the air, which would of course cool down.

This idea generates electricity of the order of nanowatts, but after that it’s only a matter of scale.

Note that the gas WILL cool down – the energy has to come from somewhere.

This breaks the 2nd Law of Thermodynamics since energy will move from a colder body (the air) to a hotter body (whatever you connect the output to) without any work being input to the system.

OK then, how about another method that maybe gives us more energy? Although the windmill should work, its power output is not exactly world-shattering. Sorry, but this needs a bit of maths, so I’ll start off by defining the data I’ll be using:

(Figures here from ICAO atmosphere, as stated in Tennent’s Science Data Book, rounded to 2 decimal places since this is an order-of-magnitude calculation)

2.55E+25 – molecules per cubic metre

1.225 kg/m³ – density of air

6.63E-8m – 66.3nm – mean-free-path: Λ

6.92E+9 – collisions per second (about 6.9GHz hit-rate): R

4.80E-26 kg – average mass of an air molecule: M

459m/s – velocity of molecule: V

(other useful number – Avogadro’s number 6.02E+23)

Derived numbers:

5.06E-21J – approximate energy of a molecule: E (=1/2 M V²)

2.90E+14 – number of mean-free-path-diameter circles per square metre: Q (= 4/(πΛ²))
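Those derived figures, and the hit-rate R, can be checked directly from the listed data (a quick Python sketch):

import math

M = 4.80e-26     # kg, average mass of an air molecule
V = 459.0        # m/s, velocity of a molecule
mfp = 6.63e-8    # m, mean-free-path (Lambda)

E = 0.5 * M * V ** 2              # energy of one molecule
Q = 4.0 / (math.pi * mfp ** 2)    # mean-free-path-diameter circles per square metre
R = V / mfp                       # hits per second on one such circle

print(f"E = {E:.2e} J        (listed: 5.06E-21)")
print(f"Q = {Q:.2e} per m^2  (listed: 2.90E+14)")
print(f"R = {R:.2e} per s    (listed: 6.92E+9)")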

These are extreme numbers relative to daily life: even very small volumes of gas contain huge numbers of molecules and collisions, so any individual collision is difficult to distinguish. With a very small microphone, though, the fluctuations do give an increased noise figure, so the smaller the microphone, the bigger the noise signal will be, and you can’t design around that. This snippet of information is well-known to sound engineers.

When a molecule hits the wall of its container, it rebounds on average at the same speed, since the collision is perfectly elastic, and it thus retains its temperature. Of course, if it hits a warmer body it will gain energy and get “hotter” (faster), and if it hits a colder body it will lose energy and get “colder” (that is, slower). These energy transfers are happening continually, since there is a range of velocities among the gas molecules, and they are all hitting each other and transferring energy at an average rate of around 6.9GHz at room temperature. This is important, since we will be fooling a molecule into behaving as if it had hit another molecule, so that it gives its energy to us instead.

A portion of the wall with a diameter of around the mean free path of an air molecule will receive roughly V/Λ hits per second – the molecular velocity divided by the mean-free-path. That is 459 divided by 6.63E-8, or 6.9E+9 hits per second (about 7GHz).

If we can absorb the energy from the molecule by using a small microphone of diameter 0.07 micron, we will thus get a hit rate of the order of 7GHz. Of course not all the energy can be absorbed, since (a) there is a range of angles of incidence, giving about one third of the total energy in the direction we want, and (b) the microphone will probably not be very efficient at converting mechanical to electrical energy – say we can get 30% efficiency here. Overall harvest efficiency will be of the order of 10%…


…or, multiplying the energy per molecule by the hit-rate and by the number of elements per square metre (E × R × Q), about 10 kW per square metre incident using the given figures (I’m interested in ballpark numbers for now). Actually quite a surprisingly large amount.

So approximately 10kW per square metre is available, from which we could probably harvest about 1kW per square metre once we’ve got the engineering sorted.
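Putting those numbers together reproduces the ballpark figures above (a sanity check only, using the assumptions already stated):

# Ballpark check of the ~10 kW/m^2 incident and ~1 kW/m^2 harvested figures.
E = 5.06e-21       # J, energy of one molecule
R = 6.92e9         # hits per second on each mean-free-path-sized element
Q = 2.90e14        # elements per square metre
efficiency = 0.10  # ~1/3 (angles) times ~30% (conversion), as assumed above

incident = E * R * Q             # W per square metre arriving at the array
harvested = incident * efficiency

print(f"Incident:  {incident:.2e} W/m^2  (~10 kW)")
print(f"Harvested: {harvested:.2e} W/m^2  (~1 kW)")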

In order to harvest the energy, we need a small microphone. This would need to be made by nanoengineering, probably using piezo technology. We get a signal from it when it is hit by a molecule, and the molecule thus rebounds with lower energy. Recently I’ve seen that we can get diodes that will rectify a signal of this frequency, so instead of heat as the output we could in fact get electricity, but I’ll stick with the heat output for now since it nicely shows the 2nd Law to be breakable.


The piezo element should be of the order of the mean-free-path in size. Note that we could use a bigger piezo element with helium as the gas, which has a mean-free-path about 3 times that of air, so the piezo could be about 0.2 micron instead of the 0.07 micron needed for air. That may make this a bit more usable. With the longer mean-free-path the frequency goes down to around 2.3GHz (though note that helium atoms also move faster at the same temperature, so the drop in hit-rate will be somewhat less than the full factor of 3), and it’s also easier there to use a diode to rectify the output and add the outputs from a number of piezo elements to drive a load. We would also have less energy available from helium, since the energy per molecule remains the same whilst R and Q are both smaller. I need to get more data on helium to calculate the ratio, but the increase in dimensions may make it worth it, depending upon the manufacturing difficulties. 0.07 micron is right at the limit of current fab technology.

Per element, we have the kinetic energy of the molecule times the hit-rate, which is E times R or about 3.5E-11 watts incident on each piezo. Restated, that’s 35 picowatts. It takes a lot of elements to get a reasonable amount of power, but it is free. A 1kW array will be around a square metre and would currently have an astronomical price, but it’s a one-off cost and after that you just harvest energy from the air.
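The same figures give the per-element power and the rough area needed for 1 kW of output (again just arithmetic on the stated assumptions):

# Per-element incident power and the element count for 1 kW harvested.
E = 5.06e-21       # J per molecule
R = 6.92e9         # hits per second per element
Q = 2.90e14        # elements per square metre
efficiency = 0.10  # overall harvest efficiency assumed above

per_element_in = E * R                       # ~3.5e-11 W incident (35 pW)
per_element_out = per_element_in * efficiency
elements_for_1kW = 1000.0 / per_element_out
area_m2 = elements_for_1kW / Q

print(f"Incident per element:  {per_element_in:.1e} W")
print(f"Elements for 1 kW out: {elements_for_1kW:.1e}")
print(f"Array area needed:     {area_m2:.2f} m^2")   # about one square metre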

Note that conservation of energy is observed within our array – we are just transferring it from one side to the other. Conversion between mechanical and electrical energy is a normal occurrence.

If we build an array of these elements, then we have a system where the gas is cooled on the side of the piezos and heated on the side of the resistors (where the harvested electrical energy is dumped as heat), and this breaks the 2nd Law of Thermodynamics since energy is moved from a cooler to a hotter body without using energy to do it.

If you choose the resistors and heat as output, then you can of course use a Peltier block to get electricity, but at lower output than the diode version. If the array is put into the wall of an insulated container with the piezos on the inside, then the inside will cool down. A useful no-energy fridge. You can also build a room-heater or an air-conditioner depending upon configuration.

The laws of thermodynamics break down when we look at things on the nanoscale, where the particulate nature of matter becomes important. This can be used to do work without needing energy to do it. In physics, we need to re-evaluate the equivalence of work and energy, since they cannot be the same thing. Energy is a finite resource, whereas work is an infinite resource (this can be seen in any office, as well). The trouble is that, since they use the same units and in normal systems they exactly equate to each other, they are seen as the same thing, and that is also what is taught in schools and universities.

It ain’t necessarily so…

I’ll probably get complaints that you just can’t break the Laws of Thermodynamics. With the ideas put forward above, though, you have a choice. If they work, they break 2LoT, but if they don’t then they break Newton’s laws of motion. Which Law would you suggest is more easily broken? Thermodynamics was worked out using the idea of heat as a perfect fluid, without granularity, whereas the real world has molecules that have a certain size, so at the molecular scale the original calculations do not match reality since the built-in assumptions are invalid.
