tristero wrote: Well, given that the FSF themselves admit that gcc 4.3.0 has problems building newlib for "machines that are close to the metal", I'm somewhat surprised that you don't get the problems yourself.

tristero wrote: Apparently I was mistaken - that's the last time I trust what I read on the Internet.

Well, to be fair, the message you found does go into some detail about the problem, and I happened to sidestep it by patching gcc to allow things to link without specifying a specs fragment. It's much easier to run configure on third-party libraries when you don't have to add extra cruft just to get the configury to accept that the compiler really can create binaries.
This section of the message at http://gcc.gnu.org/ml/gcc/2008-03/msg00515.html covers it:

Brian Dessent wrote: One proposed remedy was apparently to have the user add libgloss to their tree, and add code to detect this and pass the appropriate flags down when configuring target libraries, allowing link tests to work. But there was objection to this because people were uncomfortable with letting link tests succeed without the user having specified a BSP, since that increases the potential for a user to build a toolchain whose configuration does not match what the actual hardware supports.

Since either the libgloss layer I added to implement pseudo devices for IO, or some equivalent, needs to be linked in order to create binaries at all, I chose to make gcc link it by default, specifically to allow link tests to work when configuring other libraries. By a happy coincidence this happens to be the same workaround for the libstdc++ configury problem.
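For reference, the mechanism gcc normally offers for this is a specs fragment passed with `-specs=`; a hypothetical fragment that appends a syscall-stub library to the default link libraries (the library name here is purely illustrative, not the one devkitARM actually uses) might look something like:

```
%rename lib old_lib
*lib:
%(old_lib) -lmystubs
```

Having to pass `-specs=mystubs.specs` on every link line - including the ones configure generates for its link tests - is exactly the sort of extra cruft that baking the default into gcc avoids.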
tristero wrote: At your advice, I started again from scratch, grabbed the latest cvs checkins and built the whole thing from scratch, and it went through without problems (on Intel - haven't done PPC yet). I can only assume that I *still* had partial builds in my system somehow previously (perhaps after I got the first problem, and went to get gmp/mpfr).

Actually, that's something that slipped my mind completely: when a build fails after partial completion, it's often not in a state where the compile can be resumed. Cleaning out a failed build is one of those things you end up doing automatically and forgetting to tell other people about, sorry.
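The safe habit is to throw the whole build directory away rather than try to resume. A minimal sketch, assuming a typical out-of-tree gcc build (the `gcc-src`/`gcc-build` paths and the configure line are illustrative only):

```shell
#!/bin/sh
# Illustrative paths: SRC is the toolchain source checkout,
# BUILD is a scratch directory used for the out-of-tree build.
SRC="$PWD/gcc-src"
BUILD="$PWD/gcc-build"

# A partially failed build often can't be resumed; start clean instead.
rm -rf "$BUILD"
mkdir -p "$BUILD"

# The actual build would then run entirely inside the fresh directory:
# (cd "$BUILD" && "$SRC"/configure --target=arm-eabi && make)
```

Keeping the build out of tree means one `rm -rf` guarantees no stale objects survive into the retry.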
tristero wrote: There are still niggling things being reported which are obviously problems:

strip: can't process non-object and non-archive file: /projects/dkp23/devkitARM/bin/arm-eabi-gccbug
strip: can't map file: /projects/dkp23/devkitARM/libexec/gcc/arm-eabi/4.3.0/install-tools (Invalid argument)

This is harmless - strip knows what it can deal with and fails gracefully. The first one is a shell script, the second a directory; anything that tries to run strip on them is making a mistake. On the other hand, this is something I always meant to go back and see if I could fix, just because it irritates me too.
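One way to silence those warnings, sketched here under the assumption of a POSIX shell with the `file` utility available (the `demo` paths stand in for the real install tree), is to check the file type before invoking strip:

```shell
#!/bin/sh
# Stand-in files mimicking the two complaints: a shell script
# masquerading as a binary, and a plain directory.
mkdir -p demo/bin demo/subdir
printf '#!/bin/sh\necho hi\n' > demo/bin/fake-gccbug
chmod +x demo/bin/fake-gccbug

for f in demo/bin/fake-gccbug demo/subdir; do
    # Only strip regular files that are actually ELF objects;
    # scripts and directories are skipped silently.
    if [ -f "$f" ] && file "$f" | grep -q 'ELF'; then
        strip "$f"
    else
        echo "skipping non-object: $f"
    fi
done
```

Neither stand-in is an ELF object, so both are skipped and strip never emits a warning.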

tristero wrote: And getting to that point means that all the other builds worked fine.

Congratulations! I bet that feels good.

tristero wrote: Ok, I have to say, all bets are off. I just did the PPC build (from scratch) and it barfed somewhere different. It actually complained about 'bad assembler option "r"', which is just insane. After a few more attempts (which all died at the same place) I decided to restart the terminal process just to see if some process limit had been hit.

tristero wrote: Bingo. Started a new terminal, restarted the build process and it got past the sticking point and kept on rocking.

I hate those - I suffered something similar on a Linux box a couple of years ago which turned out to be memory going bad. Building gcc and associated tools tends to stress a system rather a lot. As you say, it could also be other processes getting in the way.