volt-nuts@lists.febo.com

Discussion of precise voltage measurement


Curious overvoltage event

Andrea Baldoni
Mon, Aug 10, 2015 1:34 PM

Dear fellow experimenters, hello.

Yesterday the forecast for the area around my lab called for a big storm.
I was here doing some work when clouds covered the sun and a strong wind
picked up. The neon lights flickered off for a fraction of a second, not
long enough for the PC to reset; then, something like ten seconds later, the
lights went out, I heard a series of clicks from the electrical switch box,
and then the UPS alarm.
I thought it was a blackout, but no: the UPS was in overload alarm, there
was a smell of something burned, and the whole chain of three circuit
breakers feeding it had tripped.

It turned out that the power supply of my server, protected by the "line
interactive" UPS, had burned out. I opened it and found that the chain of
resistors and diodes supplying the startup power to the UC3843 control IC
had arced, and the IC itself had exploded (more details on the circuit later
on request; I haven't reverse engineered it yet).
Up to this point, nothing exceptional: an overvoltage came in from the line
and, even though (fortunately) none of the many other devices in the lab
were affected, this power supply died. I saw no lightning, however, and
heard no thunder.

The strange thing came this morning, when a customer whose office is 19 km
from my lab, in an area populated enough that there are small towns and
industrial zones between us, called because his server was off. Guess what?
Exactly the same power supply, exactly the same components arced, and the
control IC exploded in the same way. In their office, too, no other
equipment was affected, everything was running, and the UPS didn't go into
overload alarm (or maybe it did and recovered by itself; I don't actually
know).

We are surely on the same grid, even if the transformers are obviously
different, but although I have other customers in the area (none with the
same power supply model), I didn't get any other calls (which I usually do
when lightning strikes), and no one reported problems with any other
equipment.

I just had the idea of checking the server logs to see whether they powered
off at the same time. They did not: my customer's server powered off
17 hours earlier, when the storm was still far away (it's unlikely the
sudden power-off erased 17 hours of logs).

Could it be just a coincidence?

Best regards,
Andrea Baldoni

Chuck Harris
Mon, Aug 10, 2015 2:22 PM

A common failing in many switching power supplies is the assumption that
the power line voltage will snap on quickly, and snap off quickly, when the
supply is activated or deactivated.

If the mains supply hangs around in certain brown out voltage ranges, it
can fool the start up control circuitry into exceeding the ratings on
the bootstrap circuitry, or the inrush limiting circuitry, and toast parts.

Better supplies have timers that make sure the inrush protection, or the
bootstrap power, lasts only for the expected amount of time, and shut down
if those limits are exceeded.

The usual UPS can be complicit in these failures because they just pass
the mains power through to the load, and don't switch over to inverter
power until a programmed amount of brown-out has occurred... if at all.

Basically they pass the crappy brown out power on to the load.
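
As a rough illustration of the kind of stress this implies (all component
values below are assumptions, typical-looking numbers for a small UC3843
off-line supply, not taken from Andrea's unit), a minimal sketch in Python:

# Back-of-the-envelope: a bootstrap (start-up) resistor chain sized for a
# brief start-up pulse sees far more energy if a lingering brown-out keeps
# the controller hiccuping.  All numbers here are assumed.

V_BUS = 325.0      # V, roughly the rectified 230 Vac bus
V_CC = 15.0        # V, controller supply rail
R_STARTUP = 100e3  # ohm, total start-up resistor chain (assumed)

p_chain = (V_BUS - V_CC) ** 2 / R_STARTUP   # ~0.96 W while the chain feeds VCC

e_normal_start = p_chain * 0.2   # J, energy of a ~200 ms bootstrap pulse
e_brownout = p_chain * 30.0      # J, 30 s of brown-out hiccuping

print(f"chain dissipation while bootstrapping: {p_chain:.2f} W")
print(f"energy in a normal start-up pulse:     {e_normal_start:.2f} J")
print(f"energy after 30 s of brown-out hiccup: {e_brownout:.1f} J")

The point is not the exact numbers but the ratio: parts sized for a
sub-second duty can see two orders of magnitude more energy when the line
loiters in a brown-out.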

-Chuck Harris

Andrea Baldoni
Mon, Aug 10, 2015 2:47 PM

On Mon, Aug 10, 2015 at 03:34:59PM +0200, Andrea Baldoni wrote:

interactive" UPS burned out. I opened it, and the chain of resistors and
diodes giving the startup power to the control IC UC3843 arched and the IC
itself exploded (more details on the circuit later on request, I still didn't
reverse engineer it).

I was wrong; it's not the chain of startup resistors. There are two 900 V
MOSFETs with sources and drains paralleled, each one with a 1N4148 and a
330 ohm SMD resistor in parallel to take gate drive from a common line
coming from the exploded UC3843. Of course the transistors are now
drain-to-source shorted (one is also shorted to the gate, the other reads
various resistances).
The sources are connected to GND through a 0.17 ohm power resistor, now ashes.
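
A quick sanity check with assumed numbers (I haven't measured the primary)
shows why that resistor stood no chance once the MOSFETs shorted:

# With the MOSFETs failed short, the fault current is set by the rectified
# bus, the transformer primary's DC resistance and the wiring -- not by the
# control loop.  Primary resistance is a guess; the rest is Ohm's law.

V_BUS = 325.0     # V, rectified 230 Vac bus (approximate)
R_PRIMARY = 2.0   # ohm, assumed primary DC resistance plus wiring
R_SENSE = 0.17    # ohm, the source/current-sense resistor

i_fault = V_BUS / (R_PRIMARY + R_SENSE)   # ~150 A until a fuse or breaker opens
p_sense = i_fault ** 2 * R_SENSE          # ~3.8 kW in a part rated for a watt or two

print(f"fault current:        {i_fault:.0f} A")
print(f"sense resistor power: {p_sense / 1000:.1f} kW")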

It seems the initiating event could have been a drain-to-gate short in one
transistor, which, assisted by back EMF from the transformer, caused the
gate to arc over to the source pads on the PCB (connected to GND), and all
the rest followed.
At this point, despite the fact that another customer (20-something km away
in the other direction) just called because his router power supply burned
out yesterday, it was probably just a coincidence.
What do you think?

Best regards,
Andrea Baldoni

Andrea Baldoni
Mon, Aug 10, 2015 3:50 PM

On Mon, Aug 10, 2015 at 10:22:47AM -0400, Chuck Harris wrote:

> If the mains supply hangs around in certain brown out voltage ranges, it
> can fool the start up control circuitry into exceeding the ratings on
> the bootstrap circuitry, or the inrush limiting circuitry, and toast parts.

Hello Chuck. Thank you for the idea.
As I wrote in my follow-up, I was wrong and a different part of the circuit
is involved; even if at this point it may not be a mains-related failure,
it's still curious that two power supplies failed in exactly the same way
in so little time.

Best regards,
Andrea Baldoni

Chuck Harris
Mon, Aug 10, 2015 7:29 PM

I have seen plenty of 900V FET's pop on such supplies.  A couple
of reasons were usually the cause.  In one instrument, there were
obvious signs of water infiltration that allowed a water short
between the FET's terminals and the chassis... POP!  Not likely
your problem...

In most others, there is a copper colored flash on the bottom of
the circuit board, characteristic of a high joule discharge that
has sputtered copper vapor against the bottom of the board... POP!

Look at the bottom of your board.  There are several pinch points
where the layout has been designed to absorb power line transients.
They are right near where the AC comes onto the board.  If there
is copper flash on the board around them, that would indicate a
bad line surge caused the problem... if not, it was something else.

I have found spider remains to be the cause of some such failures...
and the odd brown marmorated stink bug.  They are in the wrong place at
the wrong time, and... POP!

After the surge has killed the FET's, the supply's next attempt
at starting up will usually kill the bootstrap resistors.

9 times out of 10, just replacing the FET's, the bootstrap resistors,
and cleaning the copper flash off of the circuit card will restore
operation.  That 10th time will usually be a real bear to fix.

Pay careful note to the position of the little copper foil gizmo
that is between the FET's and the heatsink.  It is important,
and must be put back correctly for proper operation.  It is supposed
to be isolated from the heatsink, and the FET's.

-Chuck Harris

